
ACI Integration with FMC Endpoint Update App

Estimated time to read: 17 minutes

  • Originally Written: May 2024

Info

This post uses the Nexus as Code (NaC) project, which makes it easy to configure ACI fabrics through YAML files. More details and examples can be found at https://developer.cisco.com/docs/nexus-as-code/#!aci-introduction

Overview

Although ACI can send traffic to any vendor's firewall, if you also have Firepower Management Center (FMC) with FTD, or an ASA, you can integrate ACI with FMC/ASA. This integration automatically pushes Endpoint Groups (EPGs) and Endpoint Security Groups (ESGs) created in an ACI tenant to FMC as dynamic objects, which can then be used in access control rules. The IP addresses are also tracked as part of the dynamic objects. In the example shown below, I have three subnets: 192.168.10.0/24, 192.168.20.0/24, and 192.168.30.0/24.

  • 192.168.10.0/24 and 192.168.20.0/24 are classified in the production ESG
  • 192.168.30.0/24 is part of the development ESG

I want a rule which permits production workloads to communicate with each other but blocks traffic from development workloads to production workloads.

Although I have subnets configured in the ACI fabric, in FMC I only have to select the production and development objects to build the rule. I don't need to know which subnets/IPs belong to which group. The IPs are also updated dynamically if an additional subnet or endpoint is added to a group.

ACI Service Graph with Policy Based Redirect

A contract in ACI is similar to an Access Control List (ACL). It is attached to a source and a destination group and permits or denies either all traffic or a subset of it. Traffic can be redirected to a firewall (or load balancer) using a concept called a service graph, without the firewall needing to be the default gateway for the servers.
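As a minimal sketch (this is a fragment of the full Nexus as Code config shown later in this post), a contract subject references the service graph, and all traffic matching the subject's filter is redirected:

contracts:
  - name: permit-to-esg-production
    subjects:
      - name: permit-any
        filters:
          - filter: src-any-to-dst # matches all traffic
        service_graph: conmurph-ftdv-routed-1 # redirect matched traffic to the firewall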

The service graph is attached to a specific contract. As shown in the following image, traffic from the 10.1.4.0 group to the production group will first be sent to a firewall.

The second image shows an intra-ESG contract. When one endpoint in the production group talks to another endpoint in the same group, the traffic will first be redirected to the firewall.

For the purposes of this post, the contracts send all traffic to the firewall, however you could selectively send only some traffic (e.g. HTTP) to the firewall while the rest goes directly to the destination (i.e. bypasses the firewall).
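A rough sketch of what that could look like in Nexus as Code (the permit-http filter and the subject names are illustrative, and the filter attribute names assume the NaC ACI schema): only HTTP is attached to the subject with the service graph, while a second subject permits everything else directly.

filters:
  - name: permit-http
    entries:
      - name: http
        ethertype: ip
        protocol: tcp
        destination_from_port: 80
        destination_to_port: 80

contracts:
  - name: permit-to-esg-production
    subjects:
      - name: redirect-http
        filters:
          - filter: permit-http
        service_graph: conmurph-ftdv-routed-1 # only HTTP is sent to the firewall
      - name: permit-rest
        filters:
          - filter: src-any-to-dst # everything else goes directly to the destination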

I'm using a routed firewall with PBR in one-arm mode to send traffic to the firewall as it's a fairly simple setup. Other designs such as two-arm could also be used.

Micro-segmentation (implemented with private VLANs) is used to force traffic to the leaf switch for policy lookup, rather than having it switched locally on the host. Intra-ESG contracts are used to send any communication between endpoints in the same ESG to the firewall.
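In the Nexus as Code config below this shows up as u_segmentation: true on each EPG's VMM domain binding, for example:

endpoint_groups:
  - name: 192.168.10.0_24
    bridge_domain: 192.168.10.0_24
    vmware_vmm_domains:
      - name: mil_3_pod_1_vmm
        u_segmentation: true # forces traffic to the leaf switch for policy lookup
        resolution_immediacy: immediate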

Multiple contracts and naming considerations

My colleague Steve has taught me a lot about ACI configs but probably the most valuable lesson (which I still keep forgetting) is to use descriptive names and not try to fit all scenarios in a single object.

For example, I previously had a single contract called to-firewall which redirected traffic to a firewall (as the name suggests). I used this for all ESGs, including as the intra-ESG contract.

The name was reasonably descriptive, however it could be improved, and reusing a single contract everywhere introduced its own issues.

To improve the configuration I've separated the single contract into four individual contracts with more descriptive names.

  • permit-to-esg-production
  • permit-to-esg-development
  • permit-intra-esg-production
  • permit-intra-esg-development

As the names suggest, each ESG has a contract which permits traffic to that ESG and an intra-ESG contract which permits traffic between endpoints within the same ESG.

This provides two key benefits:

  1. It's easier to understand what the policy is trying to achieve. This helps not only when remembering or learning how an environment was configured, but also when troubleshooting a problem.
  2. Changes can be made to one environment without affecting the others. For example, I might have to change the development contract and I don't want it to affect any traffic going to the production ESG.
Nexus as Code Configuration
---
apic:
  tenants:
    - name: conmurph-01
      managed: false

      vrfs:
      - name: vrf-01

      bridge_domains:
        - name: 192.168.10.0_24
          vrf: vrf-01
          subnets:
            - ip: 192.168.10.254/24

        - name: 192.168.20.0_24
          vrf: vrf-01
          subnets:
            - ip: 192.168.20.254/24

        - name: 192.168.30.0_24
          vrf: vrf-01
          subnets:
            - ip: 192.168.30.254/24

        - name: 192.168.40.0_24
          vrf: vrf-01
          subnets:
            - ip: 192.168.40.254/24

        - name: 6.6.6.0_24
          alias: pbr_bd
          vrf: vrf-01
          subnets:
            - ip: 6.6.6.1/24

      application_profiles:
        - name: network-segments
          endpoint_groups:
            - name: 192.168.10.0_24
              bridge_domain: 192.168.10.0_24
              vmware_vmm_domains:
                - name: mil_3_pod_1_vmm
                  u_segmentation: true # using microsegmentation to send traffic to the leaf switch
                  resolution_immediacy: immediate

            - name: 192.168.20.0_24
              bridge_domain: 192.168.20.0_24
              vmware_vmm_domains:
                - name: mil_3_pod_1_vmm
                  u_segmentation: true # using microsegmentation to send traffic to the leaf switch
                  resolution_immediacy: immediate

            - name: 192.168.30.0_24
              bridge_domain: 192.168.30.0_24
              vmware_vmm_domains:
                - name: mil_3_pod_1_vmm
                  u_segmentation: true # using microsegmentation to send traffic to the leaf switch
                  resolution_immediacy: immediate

            - name: 6.6.6.0_24
              alias: pbr_bd
              bridge_domain: 6.6.6.0_24
              vmware_vmm_domains:
                - name: mil_3_pod_1_vmm

          endpoint_security_groups:
            - name: production
              vrf: vrf-01
              epg_selectors:
                - endpoint_group: 192.168.10.0_24
                - endpoint_group: 192.168.20.0_24
              # We don't need intra-esg isolation as the intra-esg contract will send all traffic to the firewall
              intra_esg_isolation: false
              contracts:
                intra_esgs:
                  - permit-intra-esg-production
                providers:
                  - permit-to-esg-production

            - name: development
              vrf: vrf-01
              epg_selectors:
                - endpoint_group: 192.168.30.0_24
              # We don't need intra-esg isolation as the intra-esg contract will send all traffic to the firewall
              intra_esg_isolation: false
              contracts:
                intra_esgs:
                  - permit-intra-esg-development
                providers:
                  - permit-to-esg-development

      filters:
        - name: src-any-to-dst
          entries:
            - name: src-any-to-dst
              ethertype: unspecified

      contracts:
        - name: permit-to-esg-production
          subjects:
            - name: permit-any
              filters:
                - filter: src-any-to-dst
              service_graph: conmurph-ftdv-routed-1

        - name: permit-to-esg-development
          subjects:
            - name: permit-any
              filters:
                - filter: src-any-to-dst
              service_graph: conmurph-ftdv-routed-1

        - name: permit-intra-esg-production
          subjects:
            - name: permit-any
              filters:
                - filter: src-any-to-dst
              service_graph: conmurph-ftdv-routed-1

        - name: permit-intra-esg-development
          subjects:
            - name: permit-any
              filters:
                - filter: src-any-to-dst
              service_graph: conmurph-ftdv-routed-1

      services:
        service_graph_templates:
          - name: conmurph-ftdv-routed-1
            template_type: FW_ROUTED
            redirect: true
            device:
              tenant: conmurph-01
              name: conmurph-ftdv-routed-1

        l4l7_devices:
          - name: conmurph-ftdv-routed-1
            context_aware: single-Context
            type: VIRTUAL
            vmware_vmm_domain: mil_3_pod_1_vmm
            function: GoTo
            managed: false
            service_type: FW
            concrete_devices:
              - name: conmurph-ftdv-routed-1
                vcenter_name: mil_vcenter
                vm_name: conmurph-ftd-1
                interfaces:
                  - name: client
                    vnic_name: Network adapter 3 # network adapter on the VM which is used for PBR
            logical_interfaces:
              - name: client
                concrete_interfaces:
                  - device: conmurph-ftdv-routed-1
                    interface_name: client

        redirect_policies:
          - name: client
            l3_destinations:
              - ip: 6.6.6.2
                mac: 00:50:56:b6:f3:02 # MAC address of the network adapter 3 from above

        device_selection_policies:
          - contract: any # apply this device selection (PBR) policy to any contract which references the service graph template
            service_graph_template: conmurph-ftdv-routed-1

            consumer:
              l3_destination: true
              redirect_policy:
                name: client
              logical_interface: client
              bridge_domain:
                name: 6.6.6.0_24

            provider:
              l3_destination: true
              redirect_policy:
                name: client
              logical_interface: client
              bridge_domain:
                name: 6.6.6.0_24

ACI Endpoint Update App

You can run the ACI Endpoint Update app on the APIC itself, however I had some connection timeout issues in my lab when connecting from the app to the FMC.

The general setup instructions can be found here.

https://www.cisco.com/c/en/us/td/docs/security/firepower/APIC/apps/EPU/quick-start/guide/110/fmc-epu-app-aci-qsg-1-1/aci-epu-app-fmc-qsg-101_chapter_00.html

I used the standalone ACI Endpoint Update app instead of the APIC-hosted app. You can find the instructions here.

https://github.com/aci-integration/ACI-Endpoint-Update-App

I ran this on my laptop as it was just for a demo, however in production you would want to host it on a more permanent, highly available, and secure host.

You'll need Docker installed to run the container and connectivity to both APIC and FMC from wherever you run the app.

I had some issues cloning the Git repo (large file) so I had to manually download the install_aci_app_3.0.tgz file. Once it was extracted I ran ./install_aci_app_3.0.sh -t to run in test mode.

If you're running it on your laptop you can then go to http://localhost:8000 in a browser to access the app interface.

First add your APIC site.

Then add the FMC.

If everything works you should see that the state is enabled, and a successful connection and push in the Audit Log tab. The default sync/push interval is 60 seconds but this can be changed.

It's then time to create some new rules in FMC using the dynamic objects.

FMC Configuration

For this demo setup I have a virtual FTD with a single interface and a default route pointed back to my ACI fabric.

You can find the newly created dynamic objects on the Objects -> Object Management -> Network page.

I created a new rule to block traffic from the development ESG to the production ESG. Note that there are no subnets or IPs selected; these are automatically tracked and updated by FMC, so you only need to worry about the objects (production/development).

Once the rule was saved and deployed to the FTD I started to see the allowed and blocked traffic show up in the event log.

Again, if I wanted to add a new subnet (e.g. 192.168.40.0/24) to the development ESG, I would simply add it to the ESG config in ACI, as sketched below. The endpoints would automatically be updated in FMC and no changes would be required to the access rule.
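In Nexus as Code this is just a new EPG selector on the development ESG (the 192.168.40.0_24 bridge domain already exists in the config above; the new EPG name follows the same convention as the others):

endpoint_groups:
  - name: 192.168.40.0_24
    bridge_domain: 192.168.40.0_24
    vmware_vmm_domains:
      - name: mil_3_pod_1_vmm
        u_segmentation: true
        resolution_immediacy: immediate

endpoint_security_groups:
  - name: development
    vrf: vrf-01
    epg_selectors:
      - endpoint_group: 192.168.30.0_24
      - endpoint_group: 192.168.40.0_24 # newly added subnet, pushed to FMC automatically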

What about L3Outs?

An L3Out in ACI is simply a routed connection (peering) to an external device. It uses the concept of an External EPG to classify external traffic, similar to how an endpoint is classified into a standard endpoint group.

The External EPGs are also pushed to the FMC via the endpoint update app.

In the following example, an external EPG classifies traffic from 10.1.3.0/24 and redirects it to the firewall. We can then apply the required policy such as logging or inspection.

Dropping at ingress

Traffic that is not classified into an External EPG will be dropped. In this case, if the 10.1.3.0/24 prefix were removed, traffic coming from this subnet into the ACI fabric would be dropped at ingress to the fabric without ever making it to the firewall.
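One way to avoid this is a catch-all External EPG, as hinted at by the commented-out example in the config below. A sketch: two /1 prefixes are used instead of 0.0.0.0/0, since 0.0.0.0/0 has special classification behaviour for external EPGs in ACI.

external_endpoint_groups:
  - name: all-ext-subnets
    contracts:
      consumers:
        - permit-to-esg-production
        - permit-to-esg-development
    subnets:
      # the two halves of the IPv4 address space
      - prefix: 0.0.0.0/1
      - prefix: 128.0.0.0/1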

ACI Nexus as Code - Appended with L3Out

The following config extends the previous one with the L3Out configuration. In this example the bridge domain subnets are associated with the L3Out and advertised externally. The external EPGs classify the external prefixes and permit communication with the production and development ESGs through contracts. These contracts reference a service graph which redirects traffic to the firewall.

Nexus as Code Configuration
---
apic:
  tenants:
    - name: conmurph-01
      managed: false

      vrfs:
      - name: vrf-01

      bridge_domains:
        - name: 192.168.10.0_24
          vrf: vrf-01
          subnets:
            - ip: 192.168.10.254/24
              public: true
          l3outs:
            - floating-l3out-to-csr


        - name: 192.168.20.0_24
          vrf: vrf-01
          subnets:
            - ip: 192.168.20.254/24
              public: true
          l3outs:
            - floating-l3out-to-csr

        - name: 192.168.30.0_24
          vrf: vrf-01
          subnets:
            - ip: 192.168.30.254/24
              public: true
          l3outs:
            - floating-l3out-to-csr

        - name: 192.168.40.0_24
          vrf: vrf-01
          subnets:
            - ip: 192.168.40.254/24
              public: true
          l3outs:
            - floating-l3out-to-csr

        - name: 6.6.6.0_24
          alias: pbr_bd
          vrf: vrf-01
          subnets:
            - ip: 6.6.6.1/24

      application_profiles:
        - name: network-segments
          endpoint_groups:
            - name: 192.168.10.0_24
              bridge_domain: 192.168.10.0_24
              vmware_vmm_domains:
                - name: mil_3_pod_1_vmm
                  u_segmentation: true
                  resolution_immediacy: immediate

            - name: 192.168.20.0_24
              bridge_domain: 192.168.20.0_24
              vmware_vmm_domains:
                - name: mil_3_pod_1_vmm
                  u_segmentation: true
                  resolution_immediacy: immediate

            - name: 192.168.30.0_24
              bridge_domain: 192.168.30.0_24
              vmware_vmm_domains:
                - name: mil_3_pod_1_vmm
                  u_segmentation: true
                  resolution_immediacy: immediate

            - name: 6.6.6.0_24
              alias: pbr_bd
              bridge_domain: 6.6.6.0_24
              vmware_vmm_domains:
                - name: mil_3_pod_1_vmm

          endpoint_security_groups:
            - name: production
              vrf: vrf-01
              epg_selectors:
                - endpoint_group: 192.168.10.0_24
                - endpoint_group: 192.168.20.0_24
              # We don't need intra-esg isolation as the intra-esg contract will send all traffic to the firewall
              intra_esg_isolation: false
              contracts:
                intra_esgs:
                  - permit-intra-esg-production
                providers:
                  - permit-to-esg-production

            - name: development
              vrf: vrf-01
              epg_selectors:
                - endpoint_group: 192.168.30.0_24
              # We don't need intra-esg isolation as the intra-esg contract will send all traffic to the firewall
              intra_esg_isolation: false
              contracts:
                intra_esgs:
                  - permit-intra-esg-development
                providers:
                  - permit-to-esg-development

      filters:
        - name: src-any-to-dst
          entries:
            - name: src-any-to-dst
              ethertype: unspecified

      contracts:

        - name: permit-to-esg-production
          subjects:
            - name: permit-any
              filters:
                - filter: src-any-to-dst
              service_graph: conmurph-ftdv-routed-1

        - name: permit-to-esg-development
          subjects:
            - name: permit-any
              filters:
                - filter: src-any-to-dst
              service_graph: conmurph-ftdv-routed-1

        - name: permit-intra-esg-production
          subjects:
            - name: permit-any
              filters:
                - filter: src-any-to-dst
              service_graph: conmurph-ftdv-routed-1

        - name: permit-intra-esg-development
          subjects:
            - name: permit-any
              filters:
                - filter: src-any-to-dst
              service_graph: conmurph-ftdv-routed-1

      services:
        service_graph_templates:
          - name: conmurph-ftdv-routed-1
            template_type: FW_ROUTED
            redirect: true
            device:
              tenant: conmurph-01
              name: conmurph-ftdv-routed-1

        l4l7_devices:
          - name: conmurph-ftdv-routed-1
            context_aware: single-Context
            type: VIRTUAL
            vmware_vmm_domain: mil_3_pod_1_vmm
            function: GoTo
            managed: false
            service_type: FW
            concrete_devices:
              - name: conmurph-ftdv-routed-1
                vcenter_name: mil_vcenter
                vm_name: conmurph-ftd-1
                interfaces:
                  - name: client
                    vnic_name: Network adapter 3 # network adapter on the VM which is used for PBR
            logical_interfaces:
              - name: client
                concrete_interfaces:
                  - device: conmurph-ftdv-routed-1
                    interface_name: client


        redirect_policies:
          - name: client
            l3_destinations:
              - ip: 6.6.6.2
                mac: 00:50:56:b6:f3:02 # MAC address of the network adapter 3 from above

        device_selection_policies:
          - contract: any # apply this device selection (PBR) policy to any contract which references the service graph template
            service_graph_template: conmurph-ftdv-routed-1

            consumer:
              l3_destination: true
              redirect_policy:
                name: client
              logical_interface: client
              bridge_domain:
                name: 6.6.6.0_24

            provider:
              l3_destination: true
              redirect_policy:
                name: client
              logical_interface: client
              bridge_domain:
                name: 6.6.6.0_24
      l3outs:

        - name: floating-l3out-to-csr
          vrf: vrf-01
          domain: vmm_l3dom
          ospf:
            area: 0
            area_type: regular

          node_profiles:
            - name: border-leafs
              nodes:
                - node_id: 1301
                  router_id: 101.2.1.1

              interface_profiles:
                - name: mil_3_pod_1_vmm
                  ospf:
                    policy: floating-l3out-to-csr

                  interfaces: # floating SVI
                    - node_id: 1301
                      vlan: 500
                      floating_svi: true
                      ip: 172.16.100.1/24
                      paths:
                        - vmware_vmm_domain: mil_3_pod_1_vmm
                          floating_ip: 172.16.100.3/24

          external_endpoint_groups:
            # Optional catch-all external EPG which classifies any prefix
            # not matched by the more specific external EPGs below
            # - name: all-ext-subnets
            #   contracts:
            #     consumers:
            #       - permit-to-esg-production
            #       - permit-to-esg-development
            #   subnets:
            #     - prefix: 0.0.0.0/1
            #     - prefix: 128.0.0.0/1

            - name: 10.1.3.0
              contracts:
                consumers:
                  - permit-to-esg-production
                  - permit-to-esg-development
              subnets:
                - prefix: 10.1.3.0/24

            - name: 10.1.4.0
              contracts:
                consumers:
                  - permit-to-esg-production
                  - permit-to-esg-development
              subnets:
                - prefix: 10.1.4.0/24

      policies:
        ospf_interface_policies:
            - name: floating-l3out-to-csr
              network_type: p2p

CSR 8000v Configuration

The following is an example config used to set up connectivity to the external 10.1.3.0/24 and 10.1.4.0/24 subnets.

CSR 8000v configuration
!
interface GigabitEthernet2
description Towards aci fabric
mtu 9000
ip address 172.16.100.105 255.255.255.0
ip ospf network point-to-point
ip ospf mtu-ignore
ip ospf 1 area 0
negotiation auto
!
interface GigabitEthernet3
description external endpoint
ip address 10.1.3.254 255.255.255.0
ip ospf 1 area 0
negotiation auto
!
interface GigabitEthernet4
description external endpoint
ip address 10.1.4.254 255.255.255.0
ip ospf 1 area 0
negotiation auto
!
router ospf 1
router-id 104.239.1.1
!
