
ACI and Multicast Scenarios

Estimated time to read: 42 minutes

  • Originally written: February 2024

Info

This post uses the Nexus as Code (NaC) project, which makes it very easy to configure ACI fabrics through a YAML file. More details and examples can be found at https://developer.cisco.com/docs/nexus-as-code/#!aci-introduction

Overview

This post provides the Nexus as Code configuration for the following multicast scenarios in ACI.

Note

Only the minimum configuration is provided; however, additional examples (e.g. PIM source and destination filtering) are included where applicable.

Base Setup

In most scenarios there are two Ubuntu Linux VMs, one acting as a source and one as a receiver. iperf is used to simulate the multicast traffic; you could also use a video player such as VLC to generate the stream.

The following commands are run on the VMs:

  • Source: iperf -c 239.1.1.10 -u -T 3 -t 1000 -i 1

    • -c 239.1.1.10: iperf client mode; 239.1.1.10 is the multicast group
    • -u: Send the multicast stream as UDP instead of the default TCP
    • -T 3: Sets the IP time-to-live (TTL) for outgoing multicast packets to 3. ACI decrements the TTL at the ingress and egress leaf switches, so if you omit this flag and use the default TTL of 1, the packets will be dropped.
    • -t 1000: Specify the transmission time in seconds
    • -i 1: Prints to the screen every second
  • Receiver: iperf -s -u -B 239.1.1.10 -i 1

    • -s: Run iperf in server mode.
    • -u: Use UDP instead of the default TCP
    • -B 239.1.1.10: Bind to the 239.1.1.10 multicast group
    • -i 1: Prints to the screen every second

Note

If you have multiple interfaces as I do, you may need to add a route on your VMs which sends the multicast traffic out of the correct interface. In my case ens192 is for management traffic and ens224 is for the multicast stream. I create a route on the source and receivers so that all traffic for the 239.1.1.10 group uses the ens224 interface.
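
A minimal sketch, assuming the interface names above:

# send traffic for the 239.1.1.10 group (or use 239.0.0.0/8 for all multicast) out of ens224
sudo ip route add 239.1.1.10/32 dev ens224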

1. L2 multicast: Source and receiver in the same VRF, bridge domain/subnet

This scenario has the following configuration:

  • One tenant, conmurph-01
  • One VRF, source
  • One bridge domain, 192.168.101.0_24
  • One EPG, 192.168.101.0_24

PIM is not enabled as this example is purely L2 multicast within the same subnet/VLAN.

Nexus as Code

Configuration
---
apic:
  tenants:
    - name: conmurph-01
      managed: false

      vrfs:
      - name: source

      bridge_domains:

        - name: 192.168.101.0_24
          vrf: source
          subnets:
            - ip: 192.168.101.254/24
          igmp_interface_policy: allow_igmp_v3
          igmp_snooping_policy: enable_igmp_snooping

      application_profiles:
        - name: network-segments
          managed: false
          endpoint_groups:
            - name: 192.168.101.0_24
              alias: multicast_source_and_receiver
              bridge_domain: 192.168.101.0_24
              vmware_vmm_domains:
                - name: hx-dev-01-vds-01

      policies:
        igmp_interface_policies:
          - name: allow_igmp_v3
            version: v3
        igmp_snooping_policies:
          - name: enable_igmp_snooping
            admin_state: true
            querier: true

Verify

The source is sending traffic to the 239.1.1.10 multicast group

The receiver is receiving the multicast stream

On the receiver you can see the IGMP membership report to join the 239.1.1.10 group

The Ubuntu VM is configured to use IGMPv3, which is why the membership report is sent to 224.0.0.22 (the all-IGMPv3-routers address)
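
If your VM sends IGMPv2 reports instead, the IGMP version can be pinned per interface with a standard Linux sysctl; a minimal sketch, assuming ens224 is the multicast interface:

# 2 forces IGMPv2, 3 forces IGMPv3 (0 restores the default behaviour)
sudo sysctl -w net.ipv4.conf.ens224.force_igmp_version=3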

On the ACI leaf switch you can see the IGMP report

Finally, the UDP stream to 239.1.1.10 is captured on the receiver
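
As a quick check on the receiver (a sketch, assuming tcpdump is installed and ens224 is the multicast interface):

# show the IGMP membership reports and the UDP stream for the group
sudo tcpdump -i ens224 'igmp or (udp and host 239.1.1.10)'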

2. L2 and L3 multicast: Source and receiver in the same VRF but in different bridge domains/subnets

This scenario has the following configuration:

  • One tenant, conmurph-01
  • One VRF, source with PIM enabled
  • Two bridge domains, 192.168.101.0_24 and 192.168.102.0_24 with PIM enabled
  • Two EPGs, 192.168.101.0_24 and 192.168.102.0_24
  • One L3Out with PIM enabled

PIM is configured as this example uses L3 multicast, i.e. the source is in one subnet and the receiver in another.

ACI supports external RPs (i.e. outside the ACI fabric) or can provide the RP functionality fabric-wide (per VRF). When using ASM, even if all sources and receivers are inside the fabric, you still require an L3Out configuration. This is because all PIM-enabled border leaf switches become anycast RP members. One of the border leafs becomes the stripe winner, which means it acts as the designated forwarder for the multicast group.

Don't forget to configure a loopback on each border leaf of the L3Out, as this is used to determine the stripe winner.

To view the stripe winner you can use the command vsh -c 'show ip pim internal stripe-winner 239.1.1.1 vrf conmurph-01:vrf-source' (update with your own group/tenant/VRF) from the ACI leaf.
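
You can also verify the anycast RP membership and PIM neighbors from a border leaf; a sketch using standard NX-OS show commands (again, update with your own tenant/VRF):

vsh -c 'show ip pim rp vrf conmurph-01:source'
vsh -c 'show ip pim neighbor vrf conmurph-01:source'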

Also, if configuring PIM with an L3Out using SVIs, you will need to configure a redistribution route map with the source set to attached-host. The route map should match the subnet used between the ACI border leaf switches and the external router/peer. See the example config below.

Multicast in ACI vs NX-OS

There are a few differences in how multicast is forwarded in ACI versus a non-overlay network such as NX-OS or IOS. For example, (S,G) entries are not programmed on all leaf switches in the fabric. For ASM they exist only on leaf switches connected to the multicast source and on border leaf switches (for SSM it is the other way around).

As a result, if the RP is not configured you may see some flows work and others fail. On switches where the (S,G) entry is present the multicast flow will work (for example, source and receiver on the same leaf, or on a border leaf that is the stripe winner). Multicast will not work where the source and receiver are on different non-border leaf switches and no RP is configured.

An RP is required for ASM multicast, even though there are some cases where traffic may still flow with this configuration missing (a misconfiguration, not a supported setup).
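
To check whether a given leaf has the (S,G) entry programmed, you can inspect its mroute table; a sketch assuming the group and VRF names used in this post:

# for ASM the (S,G) entry only appears on source-connected and border leaf switches
vsh -c 'show ip mroute 239.1.1.10 vrf conmurph-01:source'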

ACI Contracts

ACI contracts permit communication between EPGs. Think of them like an access control list which permits traffic between a source and destination subnet on a specific port.

ACI contracts apply only to unicast traffic. Multicast and broadcast traffic is not affected.

You may need a contract in this scenario if you want to send unicast traffic between the two EPGs. However, as contracts don't apply to multicast, the IGMP and PIM join messages and the multicast stream to the 239.1.1.10 group will work without one.

For more details on contracts see the Cisco ACI Contract Guide White Paper

You may also need a contract in some cases in order to program routing information on the border leaf switches, for example when bridge domains are connected to a switch which is not a border leaf. As per the following link:

Please remember that BD subnets are not distributed via MP-BGP, which is only for external routes. A contract between an EPG in the BD and the L3Out is required. Once the contract is configured, APIC knows the L3Out needs to talk to someone in the BD and installs the BD subnet on the border leaf switches. Then the redistribution happens with the route map mentioned above. Users typically do not need to pay attention to these details because a contract is required anyway to allow the traffic.

ACI Fabric L3Out White Paper - 3. Advertise internal routes (BD subnets) to external devices

Nexus as Code

Note the commented-out action, # action: deny, in the contract configuration. You can comment out the permit action on the previous line and uncomment the deny action to test connectivity. As previously mentioned, ACI contracts do not apply to multicast traffic, so even though the contract denies traffic you should still see the multicast stream from source to receiver.

Configuration
---
apic:
  tenants:
    - name: conmurph-01
      managed: false

      vrfs:
      - name: source
        pim:
          fabric_rps:
            - ip: 172.16.239.10
              multicast_route_map: 239.1.1.10_route_map

      bridge_domains:

        - name: 192.168.101.0_24
          vrf: source
          subnets:
            - ip: 192.168.101.254/24
              public: true
          l3_multicast: true
          l3outs:
            - l3out-to-external-network

        - name: 192.168.102.0_24
          vrf: source
          subnets:
            - ip: 192.168.102.254/24
          l3_multicast: true

      application_profiles:
        - name: network-segments
          managed: false
          endpoint_groups:
            - name: 192.168.101.0_24
              alias: multicast_source
              bridge_domain: 192.168.101.0_24
              vmware_vmm_domains:
                - name: hx-dev-01-vds-01
              contracts:
                providers:
                  - between-source-and-receiver

            - name: 192.168.102.0_24
              alias: multicast_receiver
              bridge_domain: 192.168.102.0_24
              vmware_vmm_domains:
                - name: hx-dev-01-vds-01
              contracts:
                consumers:
                  - between-source-and-receiver

      filters:
        - name: src-any-to-any-dst
          entries:
            - name: src-any-to-any-dst
              ethertype: unspecified

      contracts:
        - name: between-source-and-receiver
          scope: tenant
          subjects:
            - name: all-traffic
              filters:
                - filter: src-any-to-any-dst
                  action: permit
                  # action: deny

      l3outs:
        - name: l3out-to-external-network
          vrf: source
          domain: conmurph-01.vrf-01
          ospf:
            area: 0
            area_type: regular
          l3_multicast_ipv4: true
          redistribution_route_maps: # this is required when using an L3out SVI with multicast
            - source: attached-host
              route_map: pim_svi

          node_profiles:
            - name: border-leafs
              nodes:
                - node_id: 101
                  router_id: 101.2.1.1
                  router_id_as_loopback: true
                - node_id: 102
                  router_id: 102.2.1.1
                  router_id_as_loopback: true

              interface_profiles:
                - name: l3out-to-external-network-int-profiles
                  ospf:
                    policy: l3out-to-external-network-ospf-pol

                  interfaces:
                    - node_id: 101
                      channel: hx-dev-01-fi-a
                      vlan: 30
                      svi: true
                      ip: 172.16.100.1/24

                    - node_id: 102
                      channel: hx-dev-01-fi-b
                      vlan: 30
                      svi: true
                      ip: 172.16.100.2/24

          external_endpoint_groups:
            - name: all-ext-subnets
              contracts:
                providers:
                  - between-source-and-receiver

              subnets:
                - prefix: 172.16.99.0/24 # this is an external receiver

      policies:
        multicast_route_maps: # used for scoping the RP e.g. I have one group going to one RP and another group going to a second RP
          - name: 239.1.1.10_route_map # using this so the 239.1.1.10 group shows up in "show ip igmp groups"
            entries:
              - order: 1
                #source_ip: 172.16.99.0/24 # If you're using the routemap for different use cases e.g. filtering multicast
                group_ip: 239.1.1.10/8
                rp_ip: 172.16.239.10
                action: permit
        route_control_route_maps:
          - name: pim_svi # This is required when using multicast with an L3Out SVI.
            contexts:
              - name: svi_subnet
                action: permit
                order: 1
                match_rules:
                  - 172.16.100.0_24
        match_rules:
          - name: 172.16.100.0_24 #This is the subnet used between my ACI leaf and router
            prefixes:
              - ip: 172.16.100.0/24
                aggregate: true
        ospf_interface_policies:
          - name: l3out-to-external-network-ospf-pol
            network_type: bcast

Verify

Source of 192.168.101.10 to group 239.1.1.10

Destination of 192.168.102.11 streaming group 239.1.1.10

ACI leaf switch showing the IGMP group

vlan188 is the receiver 192.168.102.11

The mroute table on the ACI fabric showing the incoming interface from the source and the outgoing interface as vlan188 which is the receiver

PIM is enabled on the BD

PIM is enabled on the VRF

The ACI fabric is configured as an RP

The L3Out has a loopback configured

PIM is enabled on the L3Out and a route profile is configured as the L3Out is using an SVI
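
The screenshots above map to standard NX-OS show commands which can be run from the leaf switches; a hedged summary, assuming this scenario's tenant and VRF names:

vsh -c 'show ip igmp groups vrf conmurph-01:source'    # receiver joins
vsh -c 'show ip mroute vrf conmurph-01:source'         # incoming/outgoing interfaces
vsh -c 'show ip pim interface vrf conmurph-01:source'  # PIM-enabled BD SVIs and L3Out interfaces
vsh -c 'show ip pim rp vrf conmurph-01:source'         # fabric RP configuration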

3. External multicast: External source is reachable via L3Out in conmurph-01, receiver is in ACI tenant conmurph-01

This scenario has the following configuration:

  • One tenant, conmurph-01
  • One VRF, receiver with PIM enabled
  • One bridge domain, 192.168.102.0_24 with PIM enabled
  • One EPG, 192.168.102.0_24
  • One L3Out with PIM enabled and OSPF peering to an external router
  • One source, 172.16.99.10, connected to the external network
  • The RP is configured on the external router and statically configured on the ACI fabric

Nexus as Code

Configuration
---
apic:
  tenants:
    - name: conmurph-01
      managed: false

      vrfs:
      - name: receiver
        pim:
          static_rps:
            - ip: 172.16.239.10
              multicast_route_map: 239.1.1.10_route_map

      bridge_domains:

        - name: 192.168.102.0_24
          vrf: receiver
          subnets:
            - ip: 192.168.102.254/24
              public: true
          l3outs:
            - l3out-to-external-network
          l3_multicast: true

      application_profiles:
        - name: network-segments
          managed: false
          endpoint_groups:

            - name: 192.168.102.0_24
              alias: multicast_receiver
              bridge_domain: 192.168.102.0_24
              vmware_vmm_domains:
                - name: hx-dev-01-vds-01
              contracts:
                consumers:
                  - between-source-and-receiver

      filters:
        - name: src-any-to-any-dst
          entries:
            - name: src-any-to-any-dst
              ethertype: unspecified

      contracts:
        - name: between-source-and-receiver
          scope: tenant
          subjects:
            - name: all-traffic
              filters:
                - filter: src-any-to-any-dst
                  action: permit

      l3outs:

        - name: l3out-to-external-network
          vrf: receiver
          domain: conmurph-01.vrf-01
          ospf:
            area: 0
            area_type: regular
          l3_multicast_ipv4: true
          redistribution_route_maps: # this is required when using an L3out SVI with multicast
            - source: attached-host
              route_map: pim_svi

          node_profiles:
            - name: border-leafs
              nodes:
                - node_id: 101
                  router_id: 101.2.1.1
                  router_id_as_loopback: true
                - node_id: 102
                  router_id: 102.2.1.1
                  router_id_as_loopback: true

              interface_profiles:
                - name: l3out-to-external-network-int-profiles
                  ospf:
                    policy: l3out-to-external-network-ospf-pol

                  interfaces:
                    - node_id: 101
                      channel: hx-dev-01-fi-a
                      vlan: 30
                      svi: true
                      ip: 172.16.100.1/24

                    - node_id: 102
                      channel: hx-dev-01-fi-b
                      vlan: 30
                      svi: true
                      ip: 172.16.100.2/24

          external_endpoint_groups:
            - name: all-ext-subnets
              contracts:
                providers:
                  - between-source-and-receiver

              subnets:
                - prefix: 172.16.99.0/24

      policies:
        multicast_route_maps: # used for scoping the RP e.g. I have one group going to one RP and another group going to a second RP
          - name: 239.1.1.10_route_map # using this so the 239.1.1.10 group shows up in "show ip igmp groups"
            entries:
              - order: 1
                #source_ip: 172.16.99.0/24 # If you're using the routemap for different use cases e.g. filtering multicast
                group_ip: 239.1.1.10/8
                rp_ip: 172.16.239.10
                action: permit
        route_control_route_maps:
          - name: pim_svi # This is required when using multicast with an L3Out SVI.
            contexts:
              - name: svi_subnet
                action: permit
                order: 1
                match_rules:
                  - 172.16.100.0_24
        match_rules:
          - name: 172.16.100.0_24 #This is the subnet used between my ACI leaf and router
            prefixes:
              - ip: 172.16.100.0/24
                aggregate: true
        ospf_interface_policies:
          - name: l3out-to-external-network-ospf-pol
            network_type: bcast

Simplified CSR 8000v configuration

Configuration
!
hostname conmurph-csr-8kv-1

!
ip multicast-routing distributed
!
interface Loopback239
ip address 172.16.239.10 255.255.255.255
ip pim sparse-mode
ip ospf 1 area 0
!
interface GigabitEthernet1
ip address dhcp
negotiation auto
!
interface GigabitEthernet2
description towards aci fabric
ip address 172.16.100.105 255.255.255.0
ip pim sparse-mode
ip ospf network broadcast
ip ospf 1 area 0
negotiation auto
!
interface GigabitEthernet3
description towards external source
ip address 172.16.99.254 255.255.255.0
ip ospf 1 area 0
ip pim passive
negotiation auto
!
router ospf 1
router-id 103.2.1.1
!
ip pim rp-address 172.16.239.10

Verify

External source of 172.16.99.10 to group 239.1.1.10

Receiver, 192.168.102.11, in the ACI fabric streaming group 239.1.1.10

Interfaces and multicast routing table on the ACI fabric showing the incoming and outgoing interfaces have been set up

ACI fabric showing the IGMP groups

CSR multicast routing table showing the incoming and outgoing interfaces have been set up

ACI routing table for the receiver VRF showing the source and RP can be reached via OSPF

CSR routing table showing the receiver, 192.168.102.11, is reachable via OSPF in the ACI fabric
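
On the CSR side, the screenshots correspond to standard IOS-XE show commands; a sketch based on the configuration above:

! incoming/outgoing interface list for the group
show ip mroute 239.1.1.10
! the static RP, 172.16.239.10
show ip pim rp mapping
! groups joined towards the router
show ip igmp groups
! OSPF peering to the ACI border leaf switches
show ip ospf neighbor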

4. Inter-VRF multicast: External source with two receivers each in a different VRF

In this inter-VRF scenario a multicast stream from an external source is available to two VRFs in the ACI fabric. The first, receiver-1, contains the L3Out to the RP and source. The second receives the stream via the first VRF. The scenario has the following configuration:

  • One tenant, conmurph-01
  • Two VRFs, receiver-1 and receiver-2 with PIM enabled
  • Two bridge domains, 192.168.101.0_24 in VRF receiver-1 and 192.168.102.0_24 in VRF receiver-2. Both have PIM enabled
  • Two EPGs, 192.168.101.0_24 and 192.168.102.0_24 connected to the respective BDs
  • One L3Out with PIM enabled and OSPF peering to an external router
  • One source, 172.16.99.10, connected to the external network
  • The RP is configured on the external router and statically configured on the ACI fabric

This is pretty much the same configuration as the previous example, with the addition of the PIM inter_vrf_policies under the receiver-2 VRF. The RP still resides outside the fabric. You could also leak the subnets between the two VRFs and to the external network if required.

Nexus as Code

Configuration
---
apic:
  tenants:
    - name: conmurph-01
      managed: false

      vrfs:
      - name: receiver-1
        pim:
          static_rps:
            - ip: 172.16.239.10
              multicast_route_map: 239.1.1.10_route_map

      - name: receiver-2
        pim:
          static_rps:
            - ip: 172.16.239.10
              multicast_route_map: 239.1.1.10_route_map
          inter_vrf_policies:
            - tenant: conmurph-01
              vrf: receiver-1
              multicast_route_map: 239.1.1.10_route_map

      bridge_domains:

        - name: 192.168.101.0_24
          vrf: receiver-1
          subnets:
            - ip: 192.168.101.254/24
              public: false
          l3outs:
            - l3out-to-external-network
          l3_multicast: true

        - name: 192.168.102.0_24
          vrf: receiver-2
          subnets:
            - ip: 192.168.102.254/24
          l3_multicast: true

      application_profiles:
        - name: network-segments
          managed: false
          endpoint_groups:

            - name: 192.168.101.0_24
              alias: multicast_receiver_1
              bridge_domain: 192.168.101.0_24
              vmware_vmm_domains:
                - name: hx-dev-01-vds-01
              contracts:
                providers:
                  - between-source-and-receiver
                consumers:
                  - between-source-and-receiver

            - name: 192.168.102.0_24
              alias: multicast_receiver_2
              bridge_domain: 192.168.102.0_24
              vmware_vmm_domains:
                - name: hx-dev-01-vds-01
              contracts:
                providers:
                  - between-source-and-receiver
                consumers:
                  - between-source-and-receiver

      filters:
        - name: src-any-to-any-dst
          entries:
            - name: src-any-to-any-dst
              ethertype: unspecified

      contracts:
        - name: between-source-and-receiver
          scope: tenant
          subjects:
            - name: all-traffic
              filters:
                - filter: src-any-to-any-dst
                  action: permit

      l3outs:

        - name: l3out-to-external-network
          vrf: receiver-1
          domain: conmurph-01.vrf-01
          ospf:
            area: 0
            area_type: regular
          l3_multicast_ipv4: true
          redistribution_route_maps: # this is required when using an L3out SVI with multicast
            - source: attached-host
              route_map: pim_svi

          node_profiles:
            - name: border-leafs
              nodes:
                - node_id: 101
                  router_id: 101.2.1.1
                  router_id_as_loopback: true
                - node_id: 102
                  router_id: 102.2.1.1
                  router_id_as_loopback: true

              interface_profiles:
                - name: l3out-to-external-network-int-profiles
                  ospf:
                    policy: l3out-to-external-network-ospf-pol

                  interfaces:
                    - node_id: 101
                      channel: hx-dev-01-fi-a
                      vlan: 30
                      svi: true
                      ip: 172.16.100.1/24

                    - node_id: 102
                      channel: hx-dev-01-fi-b
                      vlan: 30
                      svi: true
                      ip: 172.16.100.2/24

          external_endpoint_groups:
            - name: all-ext-subnets
              contracts:
                providers:
                  - between-source-and-receiver

              subnets:
                - prefix: 172.16.99.0/24

      policies:
        multicast_route_maps: # used for scoping the RP e.g. I have one group going to one RP and another group going to a second RP
          - name: 239.1.1.10_route_map # using this so the 239.1.1.10 group shows up in "show ip igmp groups"
            entries:
              - order: 1
                #source_ip: 172.16.99.0/24 # If you're using the routemap for different use cases e.g. filtering multicast
                group_ip: 239.1.1.10/8
                #rp_ip: 172.16.239.10 # Already defined in static RP - Could be used with Auto RP and BSR
                action: permit
        route_control_route_maps:
          - name: pim_svi # This is required when using multicast with an L3Out SVI.
            contexts:
              - name: svi_subnet
                action: permit
                order: 1
                match_rules:
                  - 172.16.100.0_24
        match_rules:
          - name: 172.16.100.0_24 #This is the subnet used between my ACI leaf and router
            prefixes:
              - ip: 172.16.100.0/24
                aggregate: true
        ospf_interface_policies:
          - name: l3out-to-external-network-ospf-pol
            network_type: bcast

Verify

External source of 172.16.99.10 to group 239.1.1.10

Receiver-1 capturing the stream

Receiver-2 (in the second VRF) capturing the stream

The IGMP groups showing receiver-1 and receiver-2 have joined from their respective VRFs/BDs

The mroute table for both VRFs showing the incoming and outgoing interfaces. Note the extranet receiver list indicating receiver-2 is in a different VRF
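
From the CLI, the extranet programming can be seen by comparing the mroute tables in both VRFs; a sketch assuming this scenario's names:

vsh -c 'show ip mroute 239.1.1.10 vrf conmurph-01:receiver-1'  # shows the extranet receiver list
vsh -c 'show ip mroute 239.1.1.10 vrf conmurph-01:receiver-2'  # receiver-2's view of the same flow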

5. L3 multicast between tenants: External source connected to ACI via shared-services tenant. Receiver in a different tenant

This is very similar to the previous example but instead of two VRFs in a single tenant this is two VRFs in two different tenants. The source and RP are still external. Tenant conmurph-01 is acting as a "shared-services" tenant, providing access to the external network and multicast stream. Since there are no sources or receivers in this tenant the bridge domain and EPG configuration has been removed.

Tenant conmurph-02 contains a receiver. The NaC code below has the inter-vrf multicast configuration.

Note that any contracts which are provided/consumed between tenants must be configured with a global scope.

Nexus as Code

Configuration
---
apic:
  tenants:
    - name: conmurph-01
      managed: false

      vrfs:
      - name: shared-services
        pim:
          static_rps:
            - ip: 172.16.239.10
              multicast_route_map: 239.1.1.10_route_map

      filters:
        - name: src-any-to-any-dst
          entries:
            - name: src-any-to-any-dst
              ethertype: unspecified

      contracts:
        - name: between-source-and-receiver
          scope: tenant
          subjects:
            - name: all-traffic
              filters:
                - filter: src-any-to-any-dst
                  action: permit

      l3outs:

        - name: l3out-to-external-network
          vrf: shared-services
          domain: conmurph-01.vrf-01
          ospf:
            area: 0
            area_type: regular
          l3_multicast_ipv4: true
          redistribution_route_maps: # this is required when using an L3out SVI with multicast
            - source: attached-host
              route_map: pim_svi

          node_profiles:
            - name: border-leafs
              nodes:
                - node_id: 101
                  router_id: 101.2.1.1
                  router_id_as_loopback: true
                - node_id: 102
                  router_id: 102.2.1.1
                  router_id_as_loopback: true

              interface_profiles:
                - name: l3out-to-external-network-int-profiles
                  ospf:
                    policy: l3out-to-external-network-ospf-pol

                  interfaces:
                    - node_id: 101
                      channel: hx-dev-01-fi-a
                      vlan: 30
                      svi: true
                      ip: 172.16.100.1/24

                    - node_id: 102
                      channel: hx-dev-01-fi-b
                      vlan: 30
                      svi: true
                      ip: 172.16.100.2/24

          external_endpoint_groups:
            - name: all-ext-subnets
              contracts:
                providers:
                  - between-source-and-receiver

              subnets:
                - prefix: 172.16.99.0/24

      policies:
        multicast_route_maps: # used for scoping the RP e.g. I have one group going to one RP and another group going to a second RP
          - name: 239.1.1.10_route_map # using this so the 239.1.1.10 group shows up in "show ip igmp groups"
            entries:
              - order: 1
                #source_ip: 172.16.99.0/24 # If you're using the routemap for different use cases e.g. filtering multicast
                group_ip: 239.1.1.10/8
                #rp_ip: 172.16.239.10 # Already defined in static RP - Could be used with Auto RP and BSR
                action: permit
        route_control_route_maps:
          - name: pim_svi # This is required when using multicast with an L3Out SVI.
            contexts:
              - name: svi_subnet
                action: permit
                order: 1
                match_rules:
                  - 172.16.100.0_24
        match_rules:
          - name: 172.16.100.0_24 #This is the subnet used between my ACI leaf and router
            prefixes:
              - ip: 172.16.100.0/24
                aggregate: true
        ospf_interface_policies:
          - name: l3out-to-external-network-ospf-pol
            network_type: bcast

    - name: conmurph-02
      managed: false

      vrfs:

      - name: receiver-1
        pim:
          static_rps:
            - ip: 172.16.239.10
              multicast_route_map: 239.1.1.10_route_map
          inter_vrf_policies:
            - tenant: conmurph-01
              vrf: shared-services
              multicast_route_map: 239.1.1.10_route_map

      bridge_domains:

        - name: 192.168.102.0_24
          vrf: receiver-1
          subnets:
            - ip: 192.168.102.254/24
              public: false
          l3_multicast: true

      application_profiles:
        - name: network-segments
          managed: false
          endpoint_groups:

            - name: 192.168.102.0_24
              alias: multicast_receiver_1
              bridge_domain: 192.168.102.0_24
              vmware_vmm_domains:
                - name: hx-dev-01-vds-01
              contracts:
                providers:
                  - between-source-and-receiver
                consumers:
                  - between-source-and-receiver

      filters:
        - name: src-any-to-any-dst
          entries:
            - name: src-any-to-any-dst
              ethertype: unspecified

      contracts:
        - name: between-source-and-receiver
          scope: tenant
          subjects:
            - name: all-traffic
              filters:
                - filter: src-any-to-any-dst
                  action: permit

      policies:
        multicast_route_maps: # used for scoping the RP e.g. I have one group going to one RP and another group going to a second RP
          - name: 239.1.1.10_route_map # using this so the 239.1.1.10 group shows up in "show ip igmp groups"
            entries:
              - order: 1
                #source_ip: 172.16.99.0/24 # If you're using the routemap for different use cases e.g. filtering multicast
                group_ip: 239.1.1.10/8
                #rp_ip: 172.16.239.10 # Already defined in static RP - Could be used with Auto RP and BSR
                action: permit

Verify

The IGMP groups showing the receiver has joined the group from tenant conmurph-02

The mroute table for both Tenants/VRFs showing the incoming and outgoing interfaces. Note the extranet receiver list indicating the receiver is in a different Tenant and VRF

6. Source Inside Receiver Outside: Internal source between tenants with one external receiver

This scenario reverses the previous ones. In this case the source is internal, i.e. attached to the ACI fabric, and there are two receivers: one attached to the ACI fabric and one reachable via an L3Out. The tenant conmurph-01 is acting as a "shared-services" tenant, providing access to the external network. It also has one receiver attached.

The tenant conmurph-02 contains the multicast source.

Note that any contracts which are provided/consumed between tenants must be configured with a global scope.

Inter-VRF guidelines

There are a few guidelines to be aware of when configuring inter-VRF multicast in ACI. You can find all the details in the ACI multicast configuration guide as well as the Cisco Live presentations. Here is a summary.

  • The inter-VRF policy is configured under the receiver VRF (in our example this is the shared-services VRF). The following configuration allows multicast receivers in one VRF to receive multicast traffic from sources in another VRF.

vrfs:
  - name: shared-services
    pim:
      static_rps:
        - ip: 172.16.239.10
          multicast_route_map: 239.1.1.10_route_map
      inter_vrf_policies:
        - tenant: conmurph-02
          vrf: source
          multicast_route_map: 239.1.1.10_route_map
  • Multicast traffic is always forwarded in the source VRF across the fabric, and the receiving switch is responsible for crossing VRFs. This means the source VRF needs to be present on all switches where the receiver VRF is located. In some cases you may need to configure a "dummy" EPG/BD which is associated to the source VRF and attach it to the receiver switch, e.g. via a static port binding. This ensures the source VRF is available on the switch to which receivers are attached.

  • For inter-VRF multicast using any-source multicast (ASM), the RP must be in the same VRF as the source. As per the example configuration below, you can configure a fabric RP in the source VRF and then a static RP in the receiver VRF pointing to the fabric RP.

  • L3Outs are a requirement for routed multicast in ACI. When not using the ACI fabric as the RP, the border leaf switches provide a path to the external RP. When using the fabric RP functionality, the border leaf switches take on the role of the RP. Therefore you must have an L3Out in the source VRF for the fabric RP configuration. In the example code below this is just a "dummy" L3Out which allows the fabric RP to be configured but doesn't actually connect to anything external. The "real" connectivity to the external network and receiver is still via the L3Out configured in the shared-services tenant.

  • The shared-services VRF in tenant conmurph-01 needs a route to the fabric RP, which exists in the source VRF in tenant conmurph-02. Since this IP (172.16.239.10) is not attached to a standard EPG or BD, we can't leak the route using the leaked_internal_prefixes configuration option under the VRF. However, as previously mentioned, contracts not only filter unicast traffic but also implement routing policy. In the example configuration below I have a global contract in common which is provided by the dummy L3Out external EPG (containing the fabric RP) in tenant conmurph-02 and consumed by the real L3Out external EPG in tenant conmurph-01. When this contract is added, ACI installs a route to the fabric RP, 172.16.239.10, into the conmurph-01:shared-services VRF. This allows reachability from the shared-services VRF to the RP in the source VRF. See the verification sketch after this list.
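
Two hedged checks for the last points, assuming the names used in the configuration below: confirm the source VRF is deployed on the receiver leaf and that the shared-services VRF has the leaked route to the fabric RP.

vsh -c 'show vrf conmurph-02:source'                                  # source VRF present on the receiver switch
vsh -c 'show ip route 172.16.239.10 vrf conmurph-01:shared-services'  # leaked route to the fabric RP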

Nexus as Code

Configuration
---
apic:
  tenants:
    - name: common
      managed: false
      filters:
        - name: src-any-to-any-dst
          entries:
            - name: src-any-to-any-dst
              ethertype: unspecified

      contracts:
        - name: used-to-leak-fabric-rp-to-shared-services-vrf
          scope: global
          subjects:
            - name: all-traffic
              filters:
                - filter: src-any-to-any-dst
                  action: permit

    - name: conmurph-01
      managed: false

      vrfs:
        - name: shared-services
          pim:
            static_rps:
              - ip: 172.16.239.10
                multicast_route_map: 239.1.1.10_route_map
            inter_vrf_policies:
              - tenant: conmurph-02
                vrf: source
                multicast_route_map: 239.1.1.10_route_map
          leaked_external_prefixes:
            - prefix: 172.16.99.0/24
              destinations:
                - tenant: conmurph-02
                  vrf: source

      bridge_domains:
        - name: 192.168.101.0_24
          vrf: shared-services
          subnets:
            - ip: 192.168.101.254/24
          l3_multicast: true

      application_profiles:
        - name: multicast-network-segments
          managed: true
          endpoint_groups:

            - name: 192.168.101.0_24
              alias: multicast_receiver_inside
              bridge_domain: 192.168.101.0_24
              vmware_vmm_domains:
                - name: mil_1_pod_1_vmm
              contracts:
                providers:
                  - between-source-and-receiver

      l3outs:

        - name: l3out-to-external-network
          vrf: shared-services
          domain: vmm_l3dom
          ospf:
            area: 0
            area_type: regular
          l3_multicast_ipv4: true
          redistribution_route_maps: # this is required when using an L3out SVI with multicast
            - source: attached-host
              route_map: pim_svi

          node_profiles:
            - name: border-leafs
              nodes:
                - node_id: 1101
                  router_id: 101.239.1.1
                  router_id_as_loopback: true

              interface_profiles:
                - name: l3out-to-external-network-int-profiles
                  ospf:
                    policy: l3out-to-external-network-ospf-pol

                  interfaces:
                    - node_id: 1101
                      port: 3
                      vlan: 1395
                      svi: true
                      ip: 172.16.100.13/24

          external_endpoint_groups:
            - name: all-ext-subnets
              contracts:
                providers:
                  - between-source-and-receiver
                consumers:
                  - used-to-leak-fabric-rp-to-shared-services-vrf

              subnets:
                - prefix: 172.16.99.0/24

      policies:
        multicast_route_maps:
          - name: 239.1.1.10_route_map
            entries:
              - order: 1
                #source_ip: 172.16.99.0/24 # Use this if you're using the routemap for different use cases e.g. filtering multicast
                group_ip: 239.1.1.10/8
                #rp_ip: 172.16.239.10 # The RP is already defined in the fabric RP - This field could be used with Auto RP and BSR
                action: permit
        route_control_route_maps:
          - name: pim_svi # This is required when using multicast with an L3Out SVI.
            contexts:
              - name: svi_subnet
                action: permit
                order: 1
                match_rules:
                  - 172.16.100.0_24
        match_rules:
          - name: 172.16.100.0_24 # This is the subnet used between my ACI leaf and router
            prefixes:
              - ip: 172.16.100.0/24
                aggregate: true
        ospf_interface_policies:
          - name: l3out-to-external-network-ospf-pol
            network_type: bcast
            mtu_ignore: true

    - name: conmurph-02
      managed: false

      vrfs:
        - name: source
          pim:
            fabric_rps:
              - ip: 172.16.239.10
                multicast_route_map: 239.1.1.10_route_map
          leaked_internal_prefixes:
            - prefix: 192.168.102.0/24
              destinations:
                - tenant: conmurph-01
                  vrf: shared-services
                  public: true

      bridge_domains:
        - name: 192.168.102.0_24
          vrf: source
          subnets:
            - ip: 192.168.102.254/24
          l3_multicast: true

      application_profiles:
        - name: multicast-network-segments
          managed: true
          endpoint_groups:

            - name: 192.168.102.0_24
              alias: multicast_source
              bridge_domain: 192.168.102.0_24
              vmware_vmm_domains:
                - name: mil_1_pod_1_vmm

      l3outs:

        - name: dummy-l3out # This is just to enable the fabric RP configuration but does not connect to any external device
          vrf: source
          domain: vmm_l3dom
          l3_multicast_ipv4: true

          node_profiles:
            - name: border-leafs
              nodes:
                - node_id: 1101
                  router_id: 101.239.1.1
                  router_id_as_loopback: true

              interface_profiles:
                - name: l3out-to-external-network-int-profiles

                  interfaces:
                    - node_id: 1101
                      port: 3
                      vlan: 1395
                      svi: true
                      ip: 172.16.100.13/24

          external_endpoint_groups:
            - name: fabric_rp
              subnets:
                - prefix: 172.16.239.10/32
                  shared_route_control: true
              contracts:
                providers:
                  - used-to-leak-fabric-rp-to-shared-services-vrf

      policies:
        multicast_route_maps:
          - name: 239.1.1.10_route_map
            entries:
              - order: 1
                #source_ip: 172.16.99.0/24 # Use this if you're using the routemap for different use cases e.g. filtering multicast
                group_ip: 239.1.1.10/8
                #rp_ip: 172.16.239.10 # The RP is already defined in the fabric RP - This field could be used with Auto RP and BSR
                action: permit

Simplified CSR 8000v configuration

Configuration
!
hostname conmurph-csr-8kv-1

!
ip multicast-routing distributed
!
interface GigabitEthernet2
description towards aci fabric
ip address 172.16.100.105 255.255.255.0
ip pim sparse-mode
ip ospf network broadcast
ip ospf 1 area 0
negotiation auto
!
interface GigabitEthernet3
description towards external source
ip address 172.16.99.254 255.255.255.0
ip pim sparse-mode
ip ospf 1 area 0
negotiation auto
!
router ospf 1
router-id 104.239.1.1
!
ip pim rp-address 172.16.239.10
!
ip route 172.16.239.10 255.255.255.255 172.16.100.13

Verify

Internal source of 192.168.102.11 to group 239.1.1.10

Internal receiver of 192.168.101.11 in tenant conmurph-01 receiving stream from group 239.1.1.10

External receiver of 172.16.99.10 receiving stream from group 239.1.1.10

The IGMP groups on the ACI fabric showing the internal receiver

The IGMP groups on the external network showing the external receiver

The external router OSPF and PIM connections

The ACI mroute tables showing the two VRFs and source/receivers. Note the outgoing tunnel38 interface from the source VRF is the incoming interface in the shared-services VRF

The ACI interfaces showing the source, fabric RP, internal receiver, and external connectivity (172.16.100.13)

The external router mroute table

Note the fabric RP membership as part of the source VRF

If the "dummy" L3Out is not configured in the source VRF the fabric RP won't be deployed. Note the BL (border leaf) count: 0

Here you see there are no fabric RP members as no L3Out has been configured.

When configured correctly you should see a border leaf and fabric RP member.
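
As a hedged check, the fabric RP membership can also be viewed from a leaf (update with your own tenant/VRF):

vsh -c 'show ip pim rp vrf conmurph-02:source'  # should list the fabric/anycast RP members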

7. ACI Multi-pod: multicast between pods and source/receivers in different tenants

This scenario provides a multicast example configuration when using an ACI multi-pod environment. There are two pods, pod-1 and pod-6. The source lives in tenant conmurph-02 and is connected to pod-6 on leaf-1603 while the receiver is in tenant conmurph-01 and connected to pod-1 on leaf-1101. Similar to the previous examples there is also an external receiver connected via the L3Out in tenant conmurph-01.

The configuration is very similar to the previous scenario; however, note that the "dummy" L3Out has an interface configured in pod-6. Additionally, since the source only resides in pod-6, I had to create a "dummy" BD and EPG in the source VRF, attached to the pod-1 VMM domain, to ensure the source VRF exists on the receiver switches in pod-1.

  - name: dummy_epg_to_enable_vrf_on_receiver_switch
    bridge_domain: dummy_bd_to_enable_vrf_on_receiver_switch
    vmware_vmm_domains:
      - name: mil_1_pod_1_vmm

Note in one of the screenshots that when this is deployed it may take a minute for the source VRF mroute table to initialize on the receiver switch.
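
To watch the source VRF mroute table come up on the pod-1 receiver leaf, you can poll it; a sketch assuming this scenario's names:

# repeat until the (*,G)/(S,G) entries for 239.1.1.10 appear
vsh -c 'show ip mroute vrf conmurph-02:source'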

Nexus as Code

Configuration
---
apic:
  tenants:
    - name: common
      managed: false
      filters:
        - name: src-any-to-any-dst
          entries:
            - name: src-any-to-any-dst
              ethertype: unspecified

      contracts:
        - name: used-to-leak-fabric-rp-to-shared-services-vrf
          scope: global
          subjects:
            - name: all-traffic
              filters:
                - filter: src-any-to-any-dst
                  action: permit

    - name: conmurph-01
      managed: false

      vrfs:
        - name: shared-services
          pim:
            static_rps:
              - ip: 172.16.239.10
                multicast_route_map: 239.1.1.10_route_map
            inter_vrf_policies:
              - tenant: conmurph-02
                vrf: source
                multicast_route_map: 239.1.1.10_route_map
          leaked_external_prefixes:
            - prefix: 172.16.99.0/24
              destinations:
                - tenant: conmurph-02
                  vrf: source
          leaked_internal_prefixes:
            - prefix: 192.168.101.0/24
              destinations:
                - tenant: conmurph-02
                  vrf: source

      bridge_domains:
        - name: 192.168.101.0_24
          vrf: shared-services
          subnets:
            - ip: 192.168.101.254/24
              public: true
          l3_multicast: true
          l3outs:
            - l3out-to-external-network

      application_profiles:
        - name: multicast-network-segments
          managed: true
          endpoint_groups:

            - name: 192.168.101.0_24
              alias: multicast_receiver
              bridge_domain: 192.168.101.0_24
              vmware_vmm_domains:
                - name: mil_1_pod_1_vmm

      filters:
        - name: src-any-to-any-dst
          entries:
            - name: src-any-to-any-dst
              ethertype: unspecified

      contracts:
        - name: between-source-and-receiver
          scope: tenant
          subjects:
            - name: all-traffic
              filters:
                - filter: src-any-to-any-dst
                  action: permit

      l3outs:

        - name: l3out-to-external-network
          vrf: shared-services
          domain: vmm_l3dom
          ospf:
            area: 0
            area_type: regular
          l3_multicast_ipv4: true
          redistribution_route_maps: # this is required when using an L3out SVI with multicast
            - source: attached-host
              route_map: pim_svi

          node_profiles:
            - name: border-leafs
              nodes:
                - node_id: 1101
                  router_id: 101.239.1.1
                  router_id_as_loopback: true

              interface_profiles:
                - name: l3out-to-external-network-int-profiles
                  ospf:
                    policy: l3out-to-external-network-ospf-pol

                  interfaces:
                    - node_id: 1101
                      port: 3
                      vlan: 1395
                      svi: true
                      ip: 172.16.100.13/24

          external_endpoint_groups:
            - name: all-ext-subnets
              contracts:
                providers:
                  - between-source-and-receiver
                consumers:
                  - between-source-and-receiver
                  - used-to-leak-fabric-rp-to-shared-services-vrf

              subnets:
                - prefix: 172.16.99.0/24

      policies:
        multicast_route_maps:
          - name: 239.1.1.10_route_map
            entries:
              - order: 1
                #source_ip: 172.16.99.0/24 # Use this if you're using the routemap for different use cases e.g. filtering multicast
                group_ip: 239.1.1.10/8
                #rp_ip: 172.16.239.10 # The RP is already defined in the fabric RP - This field could be used with Auto RP and BSR
                action: permit
        route_control_route_maps:
          - name: pim_svi # This is required when using multicast with an L3Out SVI.
            contexts:
              - name: svi_subnet
                action: permit
                order: 1
                match_rules:
                  - 172.16.100.0_24
        match_rules:
          - name: 172.16.100.0_24 # This is the subnet used between my ACI leaf and router
            prefixes:
              - ip: 172.16.100.0/24
                aggregate: true
        ospf_interface_policies:
          - name: l3out-to-external-network-ospf-pol
            network_type: bcast
            mtu_ignore: true

    - name: conmurph-02
      managed: false

      vrfs:
        - name: source
          pim:
            fabric_rps:
              - ip: 172.16.239.10
                multicast_route_map: 239.1.1.10_route_map
          leaked_internal_prefixes:
            - prefix: 192.168.102.0/24
              destinations:
                - tenant: conmurph-01
                  vrf: shared-services
                  public: true

      l3outs:

        - name: dummy-l3out # This is just to enable the fabric RP configuration but does not connect to any external device
          vrf: source
          domain: vmm_l3dom
          l3_multicast_ipv4: true

          node_profiles:
            - name: border-leafs
              nodes:
                - pod_id: 6
                  node_id: 1603
                  router_id: 106.239.1.1
                  router_id_as_loopback: true

              interface_profiles:
                - name: dummy-l3out

                  interfaces:
                    - pod_id: 6
                      node_id: 1603
                      port: 1
                      vlan: 1396
                      svi: true
                      ip: 172.16.100.63/24

          external_endpoint_groups:
            - name: fabric_rp
              subnets:
                - prefix: 172.16.239.10/32
                  shared_route_control: true
              contracts:
                providers:
                  - used-to-leak-fabric-rp-to-shared-services-vrf

      bridge_domains:
        - name: 192.168.102.0_24
          vrf: source
          subnets:
            - ip: 192.168.102.254/24
          l3_multicast: true

        - name: dummy_bd_to_enable_vrf_on_receiver_switch
          vrf: source
          l3_multicast: true


      application_profiles:
        - name: multicast-network-segments
          managed: true
          endpoint_groups:

            - name: 192.168.102.0_24
              alias: multicast_source
              bridge_domain: 192.168.102.0_24
              vmware_vmm_domains:
                - name: mil_2_pod_2_vmm

            - name: dummy_epg_to_enable_vrf_on_receiver_switch
              bridge_domain: dummy_bd_to_enable_vrf_on_receiver_switch
              vmware_vmm_domains:
                - name: mil_1_pod_1_vmm

      policies:
        multicast_route_maps:
          - name: 239.1.1.10_route_map
            entries:
              - order: 1
                #source_ip: 172.16.99.0/24 # Use this if you're using the routemap for different use cases e.g. filtering multicast
                group_ip: 239.1.1.10/8
                #rp_ip: 172.16.239.10 # The RP is already defined in the fabric RP - This field could be used with Auto RP and BSR
                action: permit

Verify

Output showing leaf-1101 in pod-1 and leaf-1603 in pod-6

No receivers are connected to pod-6

The internal receiver, 192.168.101.11, is connected to pod-1

pod-1 mroute table

pod-6 mroute table

pod-1 shared-services VRF showing a route to the fabric RP, 172.16.239.10, in the source VRF

Output of the mroute table in pod-1 showing the initialization

Multicast flow from source to receivers in different pods and tenants
