Date post: | 02-Apr-2018 |
Category: |
Documents |
Upload: | sajidalisajid |
View: | 222 times |
Download: | 0 times |
of 26
7/27/2019 BrocadeSI BlueCoatCacheFlow BP GA BP 275 00
1/26
APPLICATION DELIVERY
Scaling Blue Coat CacheFlow Using Brocade
ServerIron in Service Provider Environments
Provides best practices for deploying Brocade networking solutions with
the Blue Coat CacheFlow appliance. Brocade ServerIron ADX applicationdelivery controllers implemented in Cache Switching mode provide end-to-
end networking to scale a farm of Blue Coat CacheFlow appliances.
7/27/2019 BrocadeSI BlueCoatCacheFlow BP GA BP 275 00
2/26
APPLICATION DELIVERY BEST PRACTICES GUIDE
Scaling Blue Coat CacheFlow Using Brocade ServerIron in Service Provider Environments 2 of 26
CONTENTS
Introduction......................................................................................................................................................................................................................................... 3Brocade TurboIron ......................................................................................................................................................... 4Blue Coat Solution Configuration .................................................................................................................................. 4
Best Practices for TCS Configuration ......................................................................................................................................................................................... 5Cache Load Balancing ................................................................................................................................................... 6IP Spoofing Configuration .............................................................................................................................................. 7
Step 1 ...................................................................................................................................................................... 8Step 2 ...................................................................................................................................................................... 8Step 3 ...................................................................................................................................................................... 9Step 4 ...................................................................................................................................................................... 9Step 5 .................................................................................................................................................................... 10Step 6 .................................................................................................................................................................... 10Step 7 .................................................................................................................................................................... 11Step 8 .................................................................................................................................................................... 11
Appendix A: Sizing Tables ........................................................................................................................................................................................................... 12Appendix B: Load Balancer Scaling ........................................................................................................................................................................................ 13
Configuration Considerations ...................................................................................................................................... 14Basic Example of PBR ................................................................................................................................................. 15ServerIron 1 ................................................................................................................................................................. 17ServerIron 2 ................................................................................................................................................................. 22
7/27/2019 BrocadeSI BlueCoatCacheFlow BP GA BP 275 00
3/26
APPLICATION DELIVERY BEST PRACTICES GUIDE
Scaling Blue Coat CacheFlow Using Brocade ServerIron in Service Provider Environments 3 of 26
INTRODUCTION
Service providers face many challenges maintaining profitability in an extremely competitive market
environment. To remain competitive, they must:
Manage bandwidth more effectively to monetize their infrastructure efficiently and contain costs Achieve a faster Return on Investment (ROI) on infrastructure spending Offer a consistently reliable and interactive user experience to drive customer loyalty Defend against malware, which can compromise user data or disrupt the network Meet consumer demands to filter harmful or inappropriate Web content Grow their business with high-value managed servicesTogether, Brocade and Blue Coat provide a service-oriented platform that enables service providers to
enhance and personalize the end user Web experience, reduce operational costs, and create new
opportunities to monetize network traffic through revenue-generation services.
The scenario described in this paper is a representative real-world service provider use case requiring highbandwidth carrier-class caching and filtering. The joint solution uses technologies such as client IP spoofing
by a farm of Blue Coat CacheFlow appliances balanced by a pair of High Availability (HA) Brocade
ServerIron switches, connected to a Point of Presence (PoP) router running Policy-Based Routing (PBR).
The solution improves performance by using 10 Gigabit Ethernet (GbE) connectivity between the PoP and
the Blue Coat CacheFlow appliances.
Policy-Based Routing provides a mechanism for implementing routing of data packets based on the policies
defined by network administrators. As deployed in this use case, PBR allows the definition of a flow between
the client and the Internet server handling the request to ensure that traffic is directed through the Brocade
ServerIron and cache servers in both directions. This represents a layered approach to implementing
policies and thresholds in order to achieve optimal cache load balancing.
In a carrier environment, the origin server frequently performs authentication and accounting based on the
client IP address. Cloud services such as Gmail may block sessions with a cache server IP address. In such
cases, cache servers such as the Blue Coat CacheFlow appliance need to spoof client IP addresses and
send Internet requests using client IP addresses.
Brocade has a complete networking solution that includes Layer 4-7 application delivery switches and
Layer 2/3 switches and routers with PBR capability, which is optimal for most Blue Coat CacheFlow
deployments, as shown in Figure 1. Brocade has gained experience supporting deployment of a large
number of end-to-end networking solutions in Telco environments. All Brocade equipment is managed by
Brocade IronView Network Manager (INM) software.
7/27/2019 BrocadeSI BlueCoatCacheFlow BP GA BP 275 00
4/26
APPLICATION DELIVERY BEST PRACTICES GUIDE
Scaling Blue Coat CacheFlow Using Brocade ServerIron in Service Provider Environments 4 of 26
Brocade TurboIron
A Layer 2 device is required for the 10 GbE interconnect among all the devices in the Blue Coat CacheFlow
design. The Brocade TurboIron Series Layer 2 switches support 24 ports of full line-rate 10 GbE fiber or
copper (depending on SFP+ model used) with low latency (less 1.5 uS). It interfaces seamlessly with the
Brocade ServerIron ADX and NetIron XMR using fiber connectivity. The Brocade TurboIron is built for
mission-critical data; it is highly available with redundant load sharing, hot-swappable, sensing/switching
power supplies, and triple-fan assembly. The Brocade TurboIron is designed for highly efficient power and
cooling with front-to-back airflow and automatic fan speed adjustment. For more information, visit:
www.brocade.com> Products & Solutions and choose TurboIron 24X from the Select a Product menu
on the left.
Blue Coat Solution Configuration
Figure 1. HA solution configuration showing Brocade and Blue Coat devices and traffic flow
For more information on the Brocade NetIron XMR, visitwww.brocade.com> Products & Solutions and
choose Brocade NetIron XMR Series from the Select a Product menu on the left.
Internet
Backbone/IP Transit providers
Brocade XMR are switches shown as
core and border routers, but can also
be any SP router that supports PBR
Subscriber Requests
Borderrouters
Corerouters
Access Provider
NetworkPort 80 trafficPBR
CacheFlow 5000farm
CacheFlow 5000farm
BrocadeServerIron
ADX
BrocadeTurboIron
BrocadeServerIron
ADX
BrocadeTurboIron
http://www.brocade.com/http://www.brocade.com/http://www.brocade.com/http://www.brocade.com/http://www.brocade.com/http://www.brocade.com/http://www.brocade.com/7/27/2019 BrocadeSI BlueCoatCacheFlow BP GA BP 275 00
5/26
APPLICATION DELIVERY BEST PRACTICES GUIDE
Scaling Blue Coat CacheFlow Using Brocade ServerIron in Service Provider Environments 5 of 26
BEST PRACTICES FOR TCSCONFIGURATION
Transparent Cache Switching (TCS) allows network administrators to quickly deploy caches anywhere in the
network with no modification to end-user browsers or other software. Brocade ServerIron devices are
placed in a dual-arm design with 10 GbE connections to the LAN in the PoP. A dual-arm design is used
because it is less intrusive in the network. These ports can be used in a LAG to provide increased bandwidth
and high reliability. It is assumed that the devices on the LAN in the PoPs can perform Policy-Based Routingwithout negatively impacting the performance of the devices. Brocade ServerIron devices are configured in
a High Availability pair with session synchronization between them, which ensures that a failure of one unit
will not result in session loss.
Figure 2. Solution TCS configuration
The Brocade ServerIron pair is also configured with two Virtual Router Redundancy Protocol (VRRP)
instances. One is used to send traffic to the ServerIron pair from the outside; the other is used as the
default gateway for the caches. The use of VRRP (unlike proprietary implementations) provides a standards-
based mechanism for Layer 3 failover between the ServerIron switches.
Two PBR policies are configured on a router in the PoP:
One is designed to catch outbound user traffic and is configured to route traffic with a source IPaddress as the client pool and destination TCP port 80 to a next hop as the VRRP address on the
ServerIron pair.
The other is designed to catch return traffic and direct it to the cache setup. This PBR is designed toroute traffic that has a destination IP address as client pool and source TCP port 80 to a next hop as
the VRRP address on the ServerIron pair. Since the caches are doing IP spoofing and requesting
content from the Internet using the client IP addresses, without this PBR, return traffic would bypass
the cache setup and go directly to the client, which would then fail.
Brocade ServerIron HA pair
- Dual arm design with 10 GbE links
- Configured with 2 VRRP VRIDs: VRRP-A and VRRP-B
- VRRP-A is the outside VRRP, both PBRs have
a next hop of this VRRP
- VRRP-B is the inside VRRP, GW for the cache
- Both ServerIrons perform session synch for HA
PBR configured to route incoming trafficfrom Internet with dest. IP address of the
client and source TCP port 80 to a next
hop of VRRP-A on the ServerIron pair
PoP network
Server
Internet
Access
Client
Cache farm
Caches are configured with
default GW as VRRP-B on
the ServerIron pair
PBR configured to route incoming traffic
from client with source IP address of the
client and dest.TCP port 80 to a next
hop of VRRP-A on the ServerIron pair
7/27/2019 BrocadeSI BlueCoatCacheFlow BP GA BP 275 00
6/26
APPLICATION DELIVERY BEST PRACTICES GUIDE
Scaling Blue Coat CacheFlow Using Brocade ServerIron in Service Provider Environments 6 of 26
The default gateway for the caches is configured as a VRRP address on the Brocade ServerIron pair. All
requests and the return traffic that goes back to the clients pass through the ServerIron switches. It is
essential that traffic between the caches and the Internet pass through the Brocade ServerIron switches.
Since the caches are spoofing client IP addresses, the ServerIron switches can keep a session entry that
they use for scaling the infrastructure.
Brocade ServerIron switches are carrier-class devices and have the following characteristics:
They do not rely on an off-the-shelf operating system such as Linux. Rather they use an operatingsystem specifically designed for load balancing switches.
They do not have any moving parts, such as hard disks, and hence have better Mean Time BetweenFailure (MTBF) ratings, which makes them more suitable for mission-critical operation.
They have very fast boot time, which allows the service to restart quickly should service be interrupted. The control plane and the data planes are separate, which means that the control plane is not
impacted by the load or issues on the data plane. This also means that you can add resources on the
data plane by adding an Application Switching Mode (ASM), for example, without any changes on I/O
ports, and the same reasoning applies to adding I/O ports without making any changes on the control
plane.
Cache Load Balancing
Brocade recommends the use of a layered approach to policies and thresholds to achieve optimal load
balancing of the caches as described below:
Multimedia content. The first layercontrols sites with multimedia content, such as youtube.com, whichcan put a huge strain on the caching setup. It is advisable to segregate caches serving content delivery
sites from the rest of the caches. Traffic destined for these sites is sent to a group of caches that serve
the content. This allows the other caches to serve other types of content without being overloaded with
the heavy bitstream traffic generated by content delivery sites.
Load balancing. The second layer is for load balancing, which uses IP address-based hashing. Thehashing feature uses source and destination hash mask. The hash mask determines how many of the
source and destination IP addresses are used by the hash function. The Brocade ServerIron uses the
hash masks to select a cache server. Brocade recommends using destination-IP hash mask, which
minimizes duplication of content on the cache servers by ensuring that a particular Web site is always
cached on the same cache server.
Control. The next layer adds more control. Some sites tend to be loaded more heavily than other sitesand this impacts some caches more than others, based on the destination IP address hashing. The
Brocade ServerIron can weight cache load balancing to add a tuning step after setups are stable.
Caches for the high-load sites are given lower weights, and caches sent to lower-load sites (based on
the hashing mechanism) have increased weighting. Overall, this leads to more lightly loaded sites,
which in turn results in overall leveling of the load across cache servers.
Maximum connections. The next layer of control involves specifying the maximum number ofconnections that a given cache server can handle. By setting a limit, a condition in which the capacitythreshold of a cache server is exceeded can be avoided. If any of the caches reaches its set threshold,
the Brocade ServerIron will stop forwarding new requests to the loaded cache server, thereby
decreasing the number of connections on the cache and releasing it from load. The requests that
would have been handled by the cache that reached its threshold are load balanced to the other
caches in the group. When all the caches reach their limit, as a last resort ServerIron can forward the
requests to the Internet to ensure service availability.
7/27/2019 BrocadeSI BlueCoatCacheFlow BP GA BP 275 00
7/26
APPLICATION DELIVERY BEST PRACTICES GUIDE
Scaling Blue Coat CacheFlow Using Brocade ServerIron in Service Provider Environments 7 of 26
IP Spoofing Configuration
The Brocade ServerIron was designed to take into account IP spoofing, which entails sizing as well as
IP spoofing support on the load balancer.
In TCS, when a client makes a request for HTTP content on the Internet, the ServerIron directs the request
to a cache server, rather than to the Internet. If the requested content is not on a cache server, it is
obtained from an origin Web server on the Internet, stored on a cache server to accommodate future
requests, and sent from the cache server back to the requesting client.
When a cache server makes a request for content from the origin server, it can do one of the following:
The cache server replaces the requesting client's IP address with its own before sending the request tothe Internet. The origin server then sends the content to the cache server. The cache server stores the
content and sends it to the requesting client, changing the source IP address from its own to the origin
server's IP address.
The cache server does not replace the requesting client's IP address with its own. Instead, the cacheserver sends the request to the Internet using the requesting client's IP address as the source. This
allows the origin server to perform authentication and accounting based on the clients IP address,
rather than the cache servers IP address. This functionality is known as cache server IP spoofing.
When cache server spoofing support is enabled, the Brocade ServerIron performs the following actions with
requests sent from a cache server to the Internet:
1. The ServerIron looks at the MAC address to see if the packet is from a cache server and uses an ARPrequest to get the MAC address of each configured cache server.
2. If the MAC address indicates that the packet is from a cache server, the ServerIron checks the sourceIP address. If the source IP address does not match the cache server's IP address, the ServerIron
concludes that this is a spoofed packet.
3. The ServerIron creates a session entry for the source and destination (IP address, port) combination,and then sends the request to the Internet.
4. When the origin server returns the content, the ServerIron looks for a session entry that matches thepacket. If the session entry is found, the ServerIron sends the packet to the appropriate cache server.
7/27/2019 BrocadeSI BlueCoatCacheFlow BP GA BP 275 00
8/26
APPLICATION DELIVERY BEST PRACTICES GUIDE
Scaling Blue Coat CacheFlow Using Brocade ServerIron in Service Provider Environments 8 of 26
The complete traffic flow is illustrated in the following figures.
Step 1
Step 2
Brocade
ServerIron pair
- Client requests sent to
ServerIrons by PBR on a
router in the PoP network
- Next hop for the PBR is a VRRPaccress on the ServerIron pair
PoP network
Servers
Internet
Access
Clients
Cache farm
Router MAC SI MAC Client IP Server IP
Source MAC Dest. MAC Source IP Dest. IP
Brocade
ServerIron pair
- The SI selects a cache
based on the load balancing/
hashing mechanism
PoP network
Servers
Internet
Access
Clients
Cache farm
SI MAC Cache MAC Client IP Server IP
Source MAC Dest. MAC Source IP Dest. IP
7/27/2019 BrocadeSI BlueCoatCacheFlow BP GA BP 275 00
9/26
APPLICATION DELIVERY BEST PRACTICES GUIDE
Scaling Blue Coat CacheFlow Using Brocade ServerIron in Service Provider Environments 9 of 26
Step 3
Step 4
Brocade
ServerIron pair
- The SI sees traffic from the cache
MAC but not with the cache IP
- The SI knows that this is a
spoofed session
- The SI creates a session entry in its
seccion table to record which cache
to send the return traffic to
PoP network
Servers
Internet
Access
Clients
Cache farm
Cache MAC SI MAC Client IP Server IP
Source MAC Dest. MAC Source IP Dest. IP
Brocade
ServerIron pair
- The SI routes the requests
using its routing table
PoP network
Servers
Internet
Access
Clients
Cache farm
SI MAC Router MAC Client IP Server IP
Source MAC Dest. MAC Source IP Dest. IP
7/27/2019 BrocadeSI BlueCoatCacheFlow BP GA BP 275 00
10/26
APPLICATION DELIVERY BEST PRACTICES GUIDE
Scaling Blue Coat CacheFlow Using Brocade ServerIron in Service Provider Environments 10 of 26
Step 5
Step 6
Brocade
ServerIron pair
- The return traffic is routed back
to the SI by a policy-based route
PoP network
Servers
Internet
Access
Clients
Cache farm
Router MAC SI MAC Server IP Client IP
Source MAC Dest. MAC Source IP Dest. IP
Brocade
ServerIron pair
- The SI checks its session table
and forwards the return traffic
to the cache that initiated it
PoP network
Servers
Internet
Access
Clients
Cache farm
SI MAC Cache MAC Server IP Client IP
Source MAC Dest. MAC Source IP Dest. IP
7/27/2019 BrocadeSI BlueCoatCacheFlow BP GA BP 275 00
11/26
APPLICATION DELIVERY BEST PRACTICES GUIDE
Scaling Blue Coat CacheFlow Using Brocade ServerIron in Service Provider Environments 11 of 26
Step 7
Step 8
Brocade
ServerIron pair
- Cache responds back to the client
through its gateway, the SI
PoP network
Servers
Internet
Access
Clients
Cache farm
Cache MAC SI MAC Server IP Client IP
Source MAC Dest. MAC Source IP Dest. IP
Brocade
ServerIron pair
- The SI routes the traffic
back to the client
PoP network
Servers
Internet
Access
Clients
Cache farm
SI MAC Router MAC Server IP Client IP
Source MAC Dest. MAC Source IP Dest. IP
7/27/2019 BrocadeSI BlueCoatCacheFlow BP GA BP 275 00
12/26
APPLICATION DELIVERY BEST PRACTICES GUIDE
Scaling Blue Coat CacheFlow Using Brocade ServerIron in Service Provider Environments 12 of 26
APPENDIX A:SIZING TABLES
The Brocade NetIron XMR, which supports PBR, is recommended to function as core and border routers. For
more information, contact Brocade Strategic Alliances:[email protected]).
Table 1. Peering bandwidth and ports required
Peering Bandwidth (x2) # CacheFlow Servers # Layer 2 ports (1 side) # Layer 4-7 Ports
1 GbE 3 5 6
5 GbE 15 17 6
10 GbE 30 32 6
Table 2. Layer 2 switches
Layer 2 Switch
Configuration
Model (catalog) Description Qty Qty
(total)
1 GbE peering
Layer 2 ports (5) TI-24X-AC 24P 10GBE/1GBE,SFP+,TI,BR A 1 2
10G-SFPP-TWX-0308 DIRECT ATTACHED SFPP
COPPER,3M,8-PACK A
1 2
5 GbE peering
Layer 2 ports (17) TI-24X-AC 24P 10GBE/1GBE,SFP+,TI BR A 1 2
10G-SFPP-TWX-0308 DIRECT ATTACHED SFPP
COPPER,3M,8-PACK A
2 4
10 GbE peering
Layer 2 ports (32) TI-24X-AC 24P 10GBE/1GBE,SFP+,TI BR A 2 4
Table 3. Layer 4-7 switches
Layer 2 Switch
Configuration
Model (catalog) Description Qty Qty
(total)
1 GbE peering
Brocade ADX 4000 SI-4000-DC 4RU CHASSIS, 1 DC PS, 1 SF, FAN 1 2
** SI-4XG 4* 10GBIT XFP LINE CARD, SI
CHASSIS
2 4
5 GbE peering
Brocade ADX 4000 SI-4000-DC 4RU CHASSIS, 1 DC PS, 1 SF, FAN 1 2
** SI-4XG 4* 10GBIT XFP LINE CARD, SI
CHASSIS
2 4
10 GbE peering
Brocade ADX 4000 SI-4000-DC 4RU CHASSIS, 1 DC PS, 1 SF, FAN 1 2
mailto:[email protected]:[email protected]:[email protected]:[email protected]7/27/2019 BrocadeSI BlueCoatCacheFlow BP GA BP 275 00
13/26
7/27/2019 BrocadeSI BlueCoatCacheFlow BP GA BP 275 00
14/26
APPLICATION DELIVERY BEST PRACTICES GUIDE
Scaling Blue Coat CacheFlow Using Brocade ServerIron in Service Provider Environments 14 of 26
Configuration Considerations A PBR policy on an interface takes precedence over a global PBR policy. You cannot apply PBR on a port if that port already has inbound ACLs, inbound ACL-based rate limiting,
or TOS-based Quality of Service (QoS).
The number of route maps that you can define is limited by the system memory. When a route map isused in a PBR policy, the PBR policy uses up to 64 instances of a route map, up to 5 ACLs in a
matching policy for each route map instance.
ACLs with the log option configured should not be used for PBR purposes. PBR ignores implicit deny ip any any ACL entries, to ensure that for route maps that use multiple ACLs,
the traffic is compared to all the ACLs. However, if an explicit deny ip any any is configured, traffic
matching this clause will be routed normally using Layer 3 paths and will not be compared to any ACL
clauses that follow this clause.
PBR always selects the first next hop from the next hop list that is up. If a PBR policy's next hop goesdown, the policy uses another next hop if available. If no next hops are available, the device routes the
traffic in the normal way.
When you change route maps or ACL definitions, you must explicitly rebind the PBR policy to aninterface using the ip rebind-acl command.
If a PBR policy is applied globally, inbound ACLs, inbound ACL-based rate-limiting or TOS-based QoScannot be applied to any port on the router.
If an IPv4 option packet matches a deny ACL filter with the option keyword, the packet will beforwarded based on Layer 3 destination. If the ignore-options command is configured on the incoming
physical port, the packet will be forwarded based on its Layer 3 destination in hardware. Otherwise the
packet will be sent to the CPU for software forwarding.
If an IPv4 option packet matches a permit ACL filter with the option keyword, it is hardware-forwardedbased on its PBR next-hop (if available). If no PBR next-hop is available, the packet is either software or
hardware-forwarded (depending on whether ignore-options is configured), based on an IP forwardingdecision.
PBR) currently does not support the IPv4 and IPv6 features for changing the MTU. When the next hop is a GRE tunnel:
Packets that are larger than the tunnels MTU are subject to IP fragmentation and PBR processingof the fragmented packets.
For route changes of the tunnel destination, the appropriate information is automaticallypropagated to the PBR feature. Depending on the configuration of the route map, a route change
can change the active next hop of the PBR if it causes the active next hop to go down, which
triggers a new next hop selection process.
7/27/2019 BrocadeSI BlueCoatCacheFlow BP GA BP 275 00
15/26
APPLICATION DELIVERY BEST PRACTICES GUIDE
Scaling Blue Coat CacheFlow Using Brocade ServerIron in Service Provider Environments 15 of 26
Basic Example of PBR
The following commands configure and apply a PBR policy that routes HTTP traffic received on virtual
routing interface 1 from the 10.10.10.x/24 network to 5.5.5.x/24 through next-hop IP address 1.1.1.1 or, if
1.1.1.x is unavailable, through 2.2.2.1.
NetIron(config)# access-list 101 permit tcp 10.10.10.0 0.0.0.255 eq http 5.5.5.0
NetIron(config)# route-map net10web permit 101NetIron(config-routemap net10web)# match ip address 101
NetIron(config-routemap net10web)# set ip next-hop 1.1.1.1
NetIron(config-routemap net10web)# set ip next-hop 2.2.2.1
NetIron(config-routemap net10web)# exit
NetIron(config)# vlan 10
NetIron(config-vlan-10)# tagged ethernet 1/1 to 1/4
NetIron(config-vlan-10)# router-interface ve 1
NetIron(config)# interface ve 1
NetIron(config-vif-1)# ip policy route-map net10web
7/27/2019 BrocadeSI BlueCoatCacheFlow BP GA BP 275 00
16/26
APPLICATION DELIVERY BEST PRACTICES GUIDE
Scaling Blue Coat CacheFlow Using Brocade ServerIron in Service Provider Environments 16 of 26
APPENDIX C:SERVERIRON COMMAND LINE CONFIGURATION
Figure 4. Brocade ServerIrons in active-active configuration
There are two cache-groups in this configuration:
One for handling traffic that is entirely proxied by the CacheFlow appliance (that is, the source IP of thepacket going to the Internet belongs to the CacheFlow)
One for handling situations in which the client IP address is preserved as requests are forwarded to theInternet (client spoofing)
CacheFlow appliances CF1 and CF8 were reserved for handling client spoofing and subsequentlyasymmetrically routed traffic. This technique of clustering CacheFlow devices across sites is also known as
IP reflection. Cache-group 2 forwards traffic to CF1, although an HA active/standby pair of ServerIron
switches is also supported by this solution. Access list 101 is used to filter traffic that should go to cache-
group 1, and access list 102 is reserved for cache-group 2.
The architecture used in this solution supports an active-active HA design, which provides the ability for
both ADXs to support incoming traffic. This design allows the ADXs to support a greater number of
transactions, since transactions are shared between them. The ADXs share state information about the
connections to the CacheFlow appliances. If one ADX fails, the other ADX picks up the loadre-creating the
transaction from the failed device. If the maximum load is exceeded, ADX traffic is passed to the Internet
without caching.
Brocade ServerIronpair (active-active)
- Client requests sent to
ServerIrons by PBR on a
router in the PoP network
- Next hop for the PBR is a VRRP
accress on the ServerIron pair
PoP network
Servers
Internet
Access
Clients
Cache farm 1
(spoofing)
Server IP range:
10.98.1.x
RTR
RTR
Cache farm 2
Server IPs:
10.98.1.9
10.98.1.2
PRB (see
example
in App. B)Cache Server Group 1
Cache servers
CF2 - CF7
CF9 - CF11
with spoofing
Cache Server Group 2
Cache servers
CF1, CF8
7/27/2019 BrocadeSI BlueCoatCacheFlow BP GA BP 275 00
17/26
APPLICATION DELIVERY BEST PRACTICES GUIDE
Scaling Blue Coat CacheFlow Using Brocade ServerIron in Service Provider Environments 17 of 26
ServerIron 1
global-stp
global-protocol-vlan
!
trunk server ethe 2/3 to 2/4
port-name "To_ISG1" ethernet 2/3trunk server ethe 3/3 to 3/4
port-name "To_ISG2" ethernet 3/3
!
session sync-update
!
server active-active-port ethe 3/11 vlan-id 200
!
!
server force-cache-rehash
!
server port 80
session-sync
tcp!
context default
!
server cache-name CF2 10.98.1.3
port http
port http url "HEAD /~who-am-i-today~" port http l4-check-only
port ssl
port ssl l4-check-only
!
server cache-name CF3 10.98.1.4
port httpport http url "HEAD /~who-am-i-today~" port http l4-check-only
port ssl
port ssl l4-check-only
!
server cache-name CF4 10.98.1.5
port http
port http url "HEAD /~who-am-i-today~"
port http l4-check-only
port ssl
port ssl l4-check-only
!
server cache-name CF5 10.98.1.6
port http
port http url "HEAD /~who-am-i-today~" port http l4-check-only
port ssl
port ssl l4-check-only
!
server cache-name CF6 10.98.1.7
7/27/2019 BrocadeSI BlueCoatCacheFlow BP GA BP 275 00
18/26
APPLICATION DELIVERY BEST PRACTICES GUIDE
Scaling Blue Coat CacheFlow Using Brocade ServerIron in Service Provider Environments 18 of 26
port http
port http url "HEAD /~who-am-i-today~" port http l4-check-only
port ssl
port ssl l4-check-only
!
server cache-name CF7 10.98.1.8
port http
port http url "HEAD /~who-am-i-today~" port http l4-check-only
port ssl
port ssl l4-check-only
!
server cache-name CF9 10.98.1.10
port http
port http url "HEAD /~who-am-i-today~" port http l4-check-only
port ssl
port ssl l4-check-only
!
server cache-name CF10 10.98.1.11
port http
port http url "HEAD /~who-am-i-today~" port http l4-check-only
port ssl
port ssl l4-check-only
!
server cache-name CF11 10.98.1.12
port http
port http url "HEAD /~who-am-i-today~" port http l4-check-only
port ssl
port ssl l4-check-only!
!
server cache-name CF-VIP 10.98.1.100
port http
port http url "HEAD /~who-am-i-today~" port http l4-check-only
port ssl
port ssl l4-check-only
!
server cache-name CF1 10.98.1.2
port http
port http url "HEAD /~who-am-i-today~" port http l4-check-only
port sslport ssl l4-check-only
!
server cache-name CF8 10.98.1.9
port http
port http url "HEAD /~who-am-i-today~" port http l4-check-only
port ssl
port ssl l4-check-only
7/27/2019 BrocadeSI BlueCoatCacheFlow BP GA BP 275 00
19/26
APPLICATION DELIVERY BEST PRACTICES GUIDE
Scaling Blue Coat CacheFlow Using Brocade ServerIron in Service Provider Environments 19 of 26
!
server cache-group 1
hash-mask 255.255.255.255 0.0.0.255
filter-acl 101
cache-name CF2
cache-name CF3
cache-name CF4cache-name CF5
cache-name CF6
cache-name CF7
cache-name CF9
cache-name CF10
cache-name CF11
spoof-support
server cache-group 2
hash-mask 255.255.255.255 0.0.0.255
filter-acl 102
cache-name CF1
cache-name CF8
vlan 1 name DEFAULT-VLAN by port
!
vlan 10 by port
untagged ethe 2/5 to 2/6 ethe 3/5 to 3/6
router-interface ve 10
spanning-tree 802-1w
spanning-tree 802-1w priority 7000
!
vlan 100 by port
untagged ethe 2/3 to 2/4 ethe 3/3 to 3/4
router-interface ve 100spanning-tree 802-1w
spanning-tree 802-1w priority 7000
!
vlan 200 by port
untagged ethe 3/11
static-mac-address 0012.f2a7.bd4a ethernet 3/11
!
vlan 99 by port
untagged ethe 3/13
router-interface ve 99
spanning-tree 802-1w
spanning-tree 802-1w priority 7000
!aaa authentication web-server default local
aaa authentication enable default local
aaa authentication login default local
aaa authentication login privilege-mode
enable telnet authentication
enable aaa console
hostname SI-1
ip acl-permit-udp-1024
7/27/2019 BrocadeSI BlueCoatCacheFlow BP GA BP 275 00
20/26
APPLICATION DELIVERY BEST PRACTICES GUIDE
Scaling Blue Coat CacheFlow Using Brocade ServerIron in Service Provider Environments 20 of 26
ip l4-policy 1 cache tcp http global
ip route 0.0.0.0 0.0.0.0 10.97.0.1
!
no telnet server
username admin password .....
router vrrp-extended
snmp-serversnmp-server community ..... ro
snmp-server host 10.9.71.90 .....
no web-management http
web-management https
!
interface ethernet 2/3
port-name To_ISG1
!
interface ethernet 2/5
port-name To_LS-top
link-aggregate configure key 10500
link-aggregate active
!
interface ethernet 2/6
link-aggregate configure key 10500
link-aggregate active
!
interface ethernet 3/1
port-name Mgmt
ip address 10.9.71.240 255.255.255.0
!
interface ethernet 3/3
port-name To_ISG2
!
interface ethernet 3/5port-name To_LS-bottom
link-aggregate configure key 11500
link-aggregate active
!
interface ethernet 3/6
link-aggregate configure key 11500
link-aggregate active
!
interface ethernet 3/13
disable
!
interface ethernet 3/14
disable!
interface ethernet 3/15
disable
!
interface ethernet 3/16
disable
!
interface ve 10
7/27/2019 BrocadeSI BlueCoatCacheFlow BP GA BP 275 00
21/26
APPLICATION DELIVERY BEST PRACTICES GUIDE
Scaling Blue Coat CacheFlow Using Brocade ServerIron in Service Provider Environments 21 of 26
port-name To_CFs
ip address 10.98.1.254 255.255.255.0
ip vrrp-extended vrid 2
backup priority 150
ip-address 10.98.1.1
track-port e 2/3 priority 30
track-trunk-port e 2/3track-port e 3/3 priority 30
track-trunk-port e 3/3
enable
!
interface ve 99
port-name To_ISG2
ip address 10.99.0.252 255.255.255.0
ip vrrp-extended vrid 3
backup
ip-address 10.99.0.254
track-port e 2/5 priority 30
track-port e 3/5 priority 30
disable
!
interface ve 100
port-name To_ISGs
ip address 10.97.0.252 255.255.255.0
ip vrrp-extended vrid 1
backup priority 150
ip-address 10.97.0.254
track-port e 2/5 priority 30
track-port e 3/5 priority 30
enable
!
access-list 101 deny tcp 10.95.100.0 0.0.0.255 any eq httpaccess-list 101 deny tcp 10.98.100.0 0.0.0.255 any
access-list 101 permit tcp any any
!
access-list 102 permit tcp 10.95.100.0 0.0.0.255 any eq http
access-list 102 permit tcp 10.95.100.0 0.0.0.255 any eq ssl
!
7/27/2019 BrocadeSI BlueCoatCacheFlow BP GA BP 275 00
22/26
APPLICATION DELIVERY BEST PRACTICES GUIDE
Scaling Blue Coat CacheFlow Using Brocade ServerIron in Service Provider Environments 22 of 26
ServerIron 2
global-stp
global-protocol-vlan
!
trunk server ethe 2/3 to 2/4
trunk server ethe 3/3 to 3/4port-name "To_ISG2" ethernet 3/3
!
session sync-update
!
server active-active-port ethe 3/11 vlan-id 200
!
!
server force-cache-rehash
server port 80
session-sync
tcp
!context default
!
server cache-name CF-VIP 10.98.1.100
port http
port http url "HEAD /~who-am-i-today~" port ssl
port ssl l4-check-only
!
server cache-name CF2 10.98.1.3
port http
port http url "HEAD /~who-am-i-today~" port http l4-check-only
port sslport ssl l4-check-only
!
server cache-name CF3 10.98.1.4
port http
port http url "HEAD /~who-am-i-today~" port http l4-check-only
port ssl
port ssl l4-check-only
!
server cache-name CF4 10.98.1.5
port http
port http url "HEAD /~who-am-i-today~" port http l4-check-only
port ssl
port ssl l4-check-only
!
server cache-name CF5 10.98.1.6
port http
port http url "HEAD /~who-am-i-today~" port http l4-check-only
7/27/2019 BrocadeSI BlueCoatCacheFlow BP GA BP 275 00
23/26
APPLICATION DELIVERY BEST PRACTICES GUIDE
Scaling Blue Coat CacheFlow Using Brocade ServerIron in Service Provider Environments 23 of 26
port ssl
port ssl l4-check-only
!
server cache-name CF6 10.98.1.7
port http
port http url "HEAD /~who-am-i-today~" port http l4-check-onlyport ssl
port ssl l4-check-only
!
server cache-name CF7 10.98.1.8
port http
port http url "HEAD /~who-am-i-today~" port http l4-check-only
port ssl
port ssl l4-check-only
!
server cache-name CF9 10.98.1.10
port http
port http url "HEAD /~who-am-i-today~" port http l4-check-only
port ssl
port ssl l4-check-only
!
server cache-name CF10 10.98.1.11
port http
port http url "HEAD /~who-am-i-today~" port http l4-check-only
port ssl
port ssl l4-check-only
!
server cache-name CF11 10.98.1.12
port httpport http url "HEAD /~who-am-i-today~" port http l4-check-only
port ssl
port ssl l4-check-only
!
!
server cache-name CF1 10.98.1.2
port http
port http url "HEAD /~who-am-i-today~" port http l4-check-only
port ssl
port ssl l4-check-only
!
server cache-name CF8 10.98.1.9
port http
port http url "HEAD /~who-am-i-today~" port http l4-check-only
port ssl
port ssl l4-check-only
!
server cache-group 1
hash-mask 255.255.255.255 0.0.0.255
7/27/2019 BrocadeSI BlueCoatCacheFlow BP GA BP 275 00
24/26
APPLICATION DELIVERY BEST PRACTICES GUIDE
Scaling Blue Coat CacheFlow Using Brocade ServerIron in Service Provider Environments 24 of 26
filter-acl 101
cache-name CF2
cache-name cf3
cache-name CF4
cache-name CF5
cache-name CF6
cache-name CF7cache-name CF9
cache-name CF10
cache-name CF11
server cache-group 2
filter-acl 102
cache-name CF1
cache-name CF8
spoof-support
vlan 1 name DEFAULT-VLAN by port
!
vlan 100 by port
untagged ethe 2/3 to 2/4 ethe 3/3 to 3/4
router-interface ve 100
spanning-tree 802-1w
!
vlan 10 by port
untagged ethe 2/5 to 2/10 ethe 3/5 to 3/10
router-interface ve 10
spanning-tree 802-1w
!
vlan 200 by port
untagged ethe 3/11
static-mac-address 0012.f2a7.fa4a ethernet 3/11!
vlan 99 by port
untagged ethe 3/13
router-interface ve 99
spanning-tree 802-1w
spanning-tree 802-1w priority 7000
!
aaa authentication web-server default local
aaa authentication enable default local
aaa authentication login default local
aaa authentication login privilege-mode
enable telnet authenticationenable aaa console
hostname SI-2
ip acl-permit-udp-1024
ip l4-policy 1 cache tcp http global
ip route 0.0.0.0 0.0.0.0 10.97.0.1
!
no telnet server
username admin password .....
7/27/2019 BrocadeSI BlueCoatCacheFlow BP GA BP 275 00
25/26
APPLICATION DELIVERY BEST PRACTICES GUIDE
Scaling Blue Coat CacheFlow Using Brocade ServerIron in Service Provider Environments 25 of 26
router vrrp-extended
snmp-server
snmp-server community ..... ro
snmp-server host 10.9.71.90 .....
no web-management http
web-management https
!interface ethernet 2/5
port-name To_LS-top
link-aggregate configure key 10500
link-aggregate active
!
interface ethernet 2/6
link-aggregate configure key 10500
link-aggregate active
!
interface ethernet 3/1
port-name Mgmt
ip address 10.9.71.241 255.255.255.0
!
interface ethernet 3/3
port-name To_ISG2
!
interface ethernet 3/5
port-name To_LS-bottom
link-aggregate configure key 11500
link-aggregate active
!
interface ethernet 3/6
link-aggregate configure key 11500
link-aggregate active
!interface ethernet 3/13
disable
!
interface ethernet 3/14
disable
!
interface ethernet 3/15
disable
!
interface ethernet 3/16
disable
!
interface ve 10port-name To_SGs
ip address 10.98.1.253 255.255.255.0
ip vrrp-extended vrid 2
backup
ip-address 10.98.1.1
track-port e 2/3 priority 30
track-trunk-port e 2/3
track-port e 3/3 priority 30
7/27/2019 BrocadeSI BlueCoatCacheFlow BP GA BP 275 00
26/26
APPLICATION DELIVERY BEST PRACTICES GUIDE
track-trunk-port e 3/3
enable
!
interface ve 99
port-name To_ISG2
ip address 10.99.0.253 255.255.255.0
ip vrrp-extended vrid 3backup priority 150
ip-address 10.99.0.254
track-port e 2/5 priority 30
track-port e 3/5 priority 30
disable
!
interface ve 100
port-name To_ISGs
ip address 10.97.0.253 255.255.255.0
ip vrrp-extended vrid 1
backup
ip-address 10.97.0.254
track-port e 2/5 priority 30
track-port e 3/5 priority 30
enable
!
access-list 101 deny tcp 10.95.100.0 0.0.0.255 any eq http
access-list 101 deny tcp 10.98.100.0 0.0.0.255 any
access-list 101 permit tcp any any
!
access-list 102 permit tcp 10.95.100.0 0.0.0.255 any eq http
access-list 102 permit tcp 10.95.100.0 0.0.0.255 any eq ssl
!
2010 Brocade Communications Systems, Inc. All Rights Reserved. 04/10 GA-BP-275-00
Brocade, the B-wing symbol, BigIron, DCX, Fabric OS, FastIron, IronView, NetIron, SAN Health, ServerIron, and TurboIron are
registered trademarks, and Brocade Assurance, DCFM, Extraordinary Networks, and Brocade NET Health are trademarks of
Brocade Communications Systems, Inc., in the United States and/or in other countries. Other brands, products, or service
names mentioned are or may be trademarks or service marks of their respective owners.
Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning
any equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes
to this document at any time, without notice, and assumes no responsibility for its use. This informational document describes
features that may not be currently available. Contact a Brocade sales office for information on feature and product availability.
Export of technical data contained in this document may require an export license from the United States government.