This video gives a high-level explanation of EVPN Multisite; there is no real config involved. The PDF for the session “BRKDCN-2913” is easy to find and download. Although this is NX-OS based, Arista has a similar feature called “EVPN Gateway”: https://www.arista.com/en/support/toi/eos-4-25-0f/14591-evpn-l3-gateway (needs registration…). Really just one line to add under the EVPN address family to change the next hop to the gateway’s address. The implementation looks much simpler than NX-OS’s…
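For reference, a sketch of what that one line looks like in EOS. The exact keyword syntax here is from memory, and the ASN and peer address are made up, so verify against the TOI before using it:

```
! Hedged EOS sketch (ASN/peer address illustrative, keywords from memory):
! rewrite the next hop of re-advertised EVPN IP-prefix routes to this
! gateway's own address before sending them to the remote domain.
router bgp 65001
   address-family evpn
      ! the "one line" from the TOI:
      neighbor 192.0.2.2 next-hop-self received-evpn-routes route-type ip-prefix
```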

This is a summary of the video:

RFC9014 (DCI EVPN Overlay) defines the Layer-2 extension between two domains:

section 3: decoupled GW: VLAN handoff with a WAN edge.
section 4: integrated GW: gateways talk L2 EVPN directly.
multi-site (BESS version) draft-sharma-bess-multi-site-evpn: supports extension of L2 and L3, unicast and multicast, VPNs. BGWs talk the eBGP EVPN AF.
gw mode: anycast VIP (ECMP: underlay) or multipath PIP (ECMP: underlay and overlay)
type-5 routes: re-originated at the BGW.
RD: separate RDs for the VIP and the PIP
RT: same for intra/inter DC
Border GW = EVPN GW
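On NX-OS the BGW role is enabled roughly like this (a minimal sketch; the site-id, interfaces, addresses and VNI are illustrative, check the Nexus 9000 multi-site guide for the exact setup):

```
! Minimal NX-OS BGW sketch (names and numbers are illustrative):
evpn multisite border-gateway 100        ! site-id 100
interface loopback100
  ip address 10.100.100.1/32             ! multi-site VIP, shared by all BGWs in the site
interface nve1
  source-interface loopback1             ! PIP, unique per BGW
  multisite border-gateway interface loopback100
  member vni 30001
    multisite ingress-replication        ! BUM toward the other site's BGWs
interface Ethernet1/1
  evpn multisite dci-tracking            ! track DCI-facing links
interface Ethernet1/2
  evpn multisite fabric-tracking         ! track fabric-facing links
```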

EVPN-IPVPN interop defines the Layer-3 extension between domains; it currently lacks EVPN-to-EVPN interconnects.

The multi-site draft combines RFC9014 and EVPN-IPVPN interop with an EVPN-to-EVPN connection: https://datatracker.ietf.org/doc/html/draft-sharma-bess-multi-site-evpn-02

Use cases:
1- Compartmentalization:

  • multiple fabrics, single DC
  • control at BGW: allows selective L2/L3 extension. Reduces the remote VTEP count, expands VTEP scale.
  • BUM packets: a leaf switch (LS) replicates only within its fabric, then BGW to BGW into the other fabric. Without multi-site, the LS replicates to ALL VTEPs in the fabric.

2- Scale

  • control at BGW: Reduces remote VTEP count. Expands VTEP scale.
  • scale through hierarchy: multiply VTEPs by sites:
    up to 128 sites per multi-site domain × up to 256 VTEPs per fabric = 32,768 VTEPs

3- DC interconnect (DCI)

  • IP reachability and MTU
  • integration with legacy networks
  • hybrid cloud connectivity: extends L3 with VRF awareness

Deeper look:
HW support only matters on the BGW; it is not needed on the LS.


  • stitched at the BGW (no recirculation, hardware rate)
  • intra-fabric tunnels go LS to LS or LS to BGW
  • inter-fabric tunnels go BGW to BGW
  • only the BGW IPs must be unique. Fabrics are “separated”.

BGW deployment considerations:

  • 1) anycast BGW
  • – up to 6 nodes. They are not interconnected; they just share an ASN, nothing else. On LS or SS.
  • – VIP mode: VIP for tunnel stitching. Focus on scale and convergence. Underlay ECMP.
  • – PIP mode: for 3rd-party interop. Uses the PIP for tunnel stitching. Uses underlay and overlay ECMP.

  • 2) vPC BGW:
  • – only 2 nodes (because of the vPC peer link). Only on LS.
  • – legacy network integration, attachment of FWs and ADCs.

NOTE: both anycast and vPC BGWs need a multi-site VIP and a PIP; only vPC needs an extra IP for the vPC VIP.
The PIP is needed for establishing BGP and for Designated Forwarder election (only one BGW forwards BUM per VLAN).
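To make the addressing concrete, a hedged sketch of the vPC BGW loopbacks (all addresses are illustrative):

```
! vPC BGW addressing sketch (illustrative addresses): each node keeps its
! own PIP, the vPC pair shares a vPC VIP as a secondary address on the NVE
! source loopback, and all BGWs in the site share the multi-site VIP.
interface loopback1
  ip address 10.1.1.1/32                 ! PIP: per-node, BGP peering + DF election
  ip address 10.1.1.100/32 secondary     ! vPC VIP: shared by the vPC pair
interface loopback100
  ip address 10.100.100.1/32             ! multi-site VIP: shared by all BGWs in the site
```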

CP and DP:

  • As eBGP is used between multi-site domains -> eBGP changes the NH => VXLAN tunnel termination and re-origination + loop prevention (AS-path). Full-mesh eBGP EVPN between sites.
  • underlay/overlay CP deployment: recommended “IEI” within the fabric: IGP as underlay, iBGP as overlay.
  • full-mesh eBGP EVPN between sites OR deploy an RS (route server) -> the RS sits in a separate AS and does control plane only = an eBGP “RR” (RFC 7947): EVPN route reflection, NH unchanged, RT rewrite!
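An NX-OS-style sketch of the route-server side of that peering (ASNs and addresses are made up): a route-map keeps the next hop unchanged on the eBGP EVPN session, `rewrite-evpn-rt-asn` handles the RT rewrite, and `retain route-target all` lets the RS hold every route even though it has no local VRFs:

```
! Route-server sketch (ASNs/IPs illustrative): control plane only, no VTEP.
route-map NH-UNCHANGED permit 10
  set ip next-hop unchanged              ! don't rewrite the NH on this eBGP session
router bgp 65000
  address-family l2vpn evpn
    retain route-target all              ! keep all EVPN routes despite no local RTs
  neighbor 10.0.0.2
    remote-as 65101                      ! BGW in site 1, separate AS
    update-source loopback0
    ebgp-multihop 5
    address-family l2vpn evpn
      send-community extended
      route-map NH-UNCHANGED out
      rewrite-evpn-rt-asn                ! RT rewrite toward the peer's AS
```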

I think this is the white paper mentioned:  https://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/white-paper-c11-739942.html

Another thing: I wish it weren’t so painful to simulate NX-OS. It is so easy to spin up a lab with cEOS… on a standard laptop…