
Fun with L3 VPNs
aka, cutting VRFs until they bleed all over each other and give me a migraine

Dave Diller
Mid-Atlantic Crossroads

20 January, 2008

MAX VRFs, a very very very brief history
(because you’ve heard it all before)

In the beginning, MAX had no VRFs, and that was OK.

Then Dan added a few, and started proselytizing.

Then, we added some more. And yea, verily, there was more proselytizing.

Now... we have NLR to add to the network. Guess what happens next?

Starting Point

inet.0 consists of:

Internet2 routes

MAX R&E customers

NGIX R&Es (i.e. NISN, DREN, NREN, ESNET, USGS)

Desired Goals

Keep inet.0 sacrosanct

MAX customers need access to one another

NLR gets its own VRF with NGIX R&Es

new BLEND VRF of I2 + NLR with NGIX R&Es

Routing ‘products’ enabled

Classic I2 R&E paths for I2-members

New NLR-only option for I2-non-members

Pre-mixed blend for those who primarily want R&E redundancy but don’t care about path

Roll-your-own for those who want granular control over the path on a per-destination basis

Crossleaking routes

Two ways to leak routes from one VRF to another:

auto-export - aka magic

rib groups

LEAK-inet0 {
    import-rib [ inet.0 BLEND.inet.0 NLR.inet.0 ];
    import-policy LEAK-I2;
}
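For context, a rib group like the one above only takes effect once it is applied under a protocol. A minimal sketch of how that might look (the BGP group name CUSTOMERS is illustrative, not from the deck):

```
routing-options {
    rib-groups {
        LEAK-inet0 {
            import-rib [ inet.0 BLEND.inet.0 NLR.inet.0 ];
            import-policy LEAK-I2;
        }
    }
}
protocols {
    bgp {
        group CUSTOMERS {
            /* routes learned in this group are copied into every
               rib listed in import-rib, subject to LEAK-I2 */
            family inet {
                unicast {
                    rib-group LEAK-inet0;
                }
            }
        }
    }
}
```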

Limitations

One import policy to handle everything

NLR should get: NGIX R&Es and MAX customers

BLEND should get: I2 routes, *plus* the above.

Tried to match and set based on routing instance with “to instance BLAH” as policy, but it did not work.

Since a route could only be matched and accepted once when importing into multiple tables, I had to leak the same set to all of them.

Means I2 routes end up in NLR VRF, and vice versa

Solution

How do we make the best of this nastiness?

Local-pref games

Community games

Basically, pref down upon crossleak, and don’t announce the interloper to customers
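A hedged sketch of what "pref down and don't announce the interloper" might look like as policy. The term names, the LEAKED community, and its value are my inventions, and the single 60 tier is a simplification (the deck actually used 95 for leaked customers and 60 for leaked I2/NLR):

```
policy-options {
    /* 10886:666 is a made-up placeholder community value */
    community LEAKED members 10886:666;

    policy-statement LEAK-I2 {
        term PREF-DOWN-LEAKS {
            from protocol bgp;
            then {
                local-preference 60;     /* pref down upon crossleak */
                community add LEAKED;    /* mark the interloper */
                accept;
            }
        }
    }

    /* export side: never announce the interloper to customers */
    policy-statement EXPORT-TO-CUSTOMERS {
        term NO-INTERLOPERS {
            from community LEAKED;
            then reject;
        }
    }
}
```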

Example: BLEND.inet.0

Initial inet.0 localprefs:

50   paths of last resort
65   I2 backup
70   I2
80   NGIX R&Es
100  customers

BLEND duplicated that, except that:

customers leaked from inet.0 and NLR.inet.0 as 95

I2 and NLR both leaked in as 60

Net results / Rationalization

Why customers leaked as 95?

Prefer VRF-native route at 100 to leaked one at 95

Why I2/NLR as 60?

In the other VRFs, the interloper is now less preferred

With leaking in from NLR and inet.0, BLEND is now:

50   paths of last resort
60   leaked I2 and NLR routes
80   NGIX R&Es
95   leaked I2 and NLR customers
100  native BLEND customers
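The rationalization above boils down to "highest local-preference wins." A toy sketch of that selection (the route labels and values mirror the BLEND tiers; the function itself is mine, not the deck's):

```python
# Toy model of BGP path selection by local-preference alone,
# using the BLEND.inet.0 tiers from the slide above.

def best_route(candidates):
    """Return the (name, local_pref) pair with the highest local-pref."""
    return max(candidates, key=lambda route: route[1])

blend_candidates = [
    ("path of last resort", 50),
    ("leaked I2/NLR route", 60),
    ("NGIX R&E route", 80),
    ("leaked customer route", 95),
    ("native BLEND customer route", 100),
]

# The VRF-native customer route at 100 beats its leaked twin at 95.
print(best_route(blend_candidates))
```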

Resultant lprefs in inet.0

With leaking in from NLR.inet.0, inet.0 is now:

50   paths of last resort
60   leaked-in NLR routes
65   I2 backup
70   I2
80   NGIX R&Es
95   leaked-in customer routes
100  customers

With communities, inet.0 members never see NLR routes, yet they “do the right thing” in BLEND.inet.0

Benefits

NLR.inet.0 looks exactly the same as inet.0, just with the players reversed

Each VRF has the same routes, just slanted differently

NLR-only routes available to those interested

I2-only routes available to existing members

Redundant and equal mix available as well

Sample leaked route

This is a route in inet.0 that originates as a static route in BLEND. Bits have been changed to protect the guilty:

277.275.262.0/24 (1 entry, 1 announced)
        *Static Preference: 5
                Next-hop reference count: 10
                Next hop: 206.196.177.327 via xe-7/3/0.213, selected
                State: <Secondary Active Int Ext>
                Age: 4w1d 3:23:37
                Task: RT
                Announcement bits (6): 0-KRT 1-RT 4-LDP 6-BGP RT Background 7-Resolve tree 4 8-Resolve tree 5
                AS path: I ()
                Communities: 10886:1 10886:10101
                Primary Routing Table BLEND.inet.0

View of same route from BLEND:

Secondary Tables: inet.0 NLR.inet.0

An interesting position...

I’ve now got three complete “R&E routing tables”, each slanted differently:

I2 primary, with NLR present but preffed down

NLR primary, with I2 present but preffed down

NLR and I2 equal, to let BGP do its thing

So, what can we see?

Interesting I2 route stats

As of this morning:

inet.0 has 10222 routes preferring the I2 peer (all routes not heard from customers or NGIX R&Es)

NLR.inet.0 has 2912 routes preferring the I2 peer (unique to I2 since preffed below everything else)

BLEND.inet.0 has 5626 routes preferring I2

Interesting NLR route stats

As of this morning:

NLR.inet.0 has 7761 routes preferring the NLR peer (all routes not heard from customers or NGIX R&Es)

inet.0 has 451 routes preferring the NLR peer (unique to NLR since preffed below everything else)

BLEND.inet.0 has 5057 routes preferring NLR

Observations on stats

BLEND is pretty darn well blended at this point

Initially (six months ago), much of ‘best path’ selection came all the way down to ‘oldest route’ in BLEND, so it was slanted a lot more to I2 as compared to the ‘new kid’ BGP session.

NLR has MANY fewer unique routes, but 2/3 of their total number are preferred when evenly mixed.

Are dual-connected networks doing TE to prefer NLR, or did normal route churn over the last few months even things out?
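A quick sanity check of the "2/3" observation against the NLR numbers above (the arithmetic is mine, not the deck's):

```python
# NLR route counts from the stats slides.
nlr_total = 7761      # routes in NLR.inet.0 preferring the NLR peer
nlr_in_blend = 5057   # routes in BLEND.inet.0 preferring NLR

share = nlr_in_blend / nlr_total
print(f"{share:.0%}")  # prints 65%, roughly two thirds
```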

IPv6 and Multicast

IPv6 posed no problem at all. Duplicated v4 rib groups and configs for v6. Worked like a charm for unicast.
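Duplicating the v4 rib groups for v6 amounts to a parallel set of inet6 ribs. A hedged sketch, with group and policy names assumed to mirror the v4 ones (not shown in the deck):

```
routing-options {
    rib-groups {
        LEAK-inet6 {
            /* same leak pattern as LEAK-inet0, against the inet6 tables */
            import-rib [ inet6.0 BLEND.inet6.0 NLR.inet6.0 ];
            import-policy LEAK-I2-v6;
        }
    }
}
```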

I don’t have multicast working in the VRFs.

SAs come in and work fine. People in the same VRF can see each other fine.

Crossleaked doesn’t work. The tree doesn’t seem to build right across the VRF boundary.

Anyone have experience with L3 VPNs and multicast?

Multicast workaround

Since NLR routes are present in inet.0 (albeit preffed down), multicast-enabled members can receive the routes and have theirs visible on NLR with the right communities applied.

Nowhere near as balanced as BLEND, but it gets their routes on NLR for now.

Sub-optimal but functional.

Continued progress

Too many compromises, not as ‘clean’ a solution as I wanted.

inet.0 has NLR routes in it, so is not sacrosanct

too many reindeer games to get lpref working right

After this was implemented, worked with Juniper to find a better answer. Took quite a while to get anywhere but eventually we did.

Secret sauce

Turns out there is the potential to match on a route multiple times inside the same policy, when importing it to multiple ribs, and it’s bloody obvious in retrospect:

match on “to rib” as part of the policy and have different actions based on destination routing table!

Tried “to instance” in the early experiments but it did not work; “to rib” never registered as a possibility.

Feel stupid, but even with escalations, it took one month (to the day) from opening the ticket for Juniper to propose this, so not obvious to them either. (trying to salve my pride here ;-)

Saucy example

As I said, bloody flipping obvious, this works in the lab:

dave@RE1-lab-t640> show configuration policy-options policy-statement TEST-LEAK
term 10 {
    from community TEST;
    to rib TEST.inet.0;
    then {
        local-preference 79;
        accept;
    }
}
term 15 {
    from community TEST;
    to rib TEST2.inet.0;
    then {
        local-preference 78;
        community set TEST-2;
        accept;
    }
}
term 20 {
    then reject;
}

In theory...

Which should allow me to do something like this, but I’ve not tested it yet, caveat emptor:

term SEND-I2-to-BLEND {
    from {
        protocol bgp;
        community ABILENE;
    }
    to rib BLEND;
    then accept;
}
term REJECT-I2-to-NLR {
    from {
        protocol bgp;
        community ABILENE;
    }
    to rib NLR;
    then reject;
}

Next steps, aka v2.0

Test “to rib” and be sure it does what it should, everywhere it should, giving me the granularity I initially wanted.

Figure out multicast and VRFs (implementing this policy will drop the preffed-down NLR routes from inet.0, so the multicast workaround will not function once things get cleaned up).

Questions?

