
ACI Firewall Configuration Example - Classic Routed Firewall with VRF Sandwich

Estimated time to read: 11 minutes

  • Originally Written: October 2022

Info

This post uses the Nexus as Code (NaC) project, which makes it very easy to configure ACI fabrics through a YAML file. More details and examples can be found at https://developer.cisco.com/docs/nexus-as-code/#!aci-introduction

Example scenario

  • ACI integration with a routed firewall with classic VRF sandwich topology
  • ASAv is configured in routed mode
  • This example will use two ACI tenants (DMZ and Production), each peering with the same ASAv via an L3Out
  • There is no VRF route leaking or global contracts
  • All traffic between the DMZ and Production tenants will be via the ASAv

Network Traffic Flow

This example provides network connectivity through the firewall without requiring a service graph

  • Q: How do we force traffic from 192.168.10.0 to 192.168.20.0 to use the firewall?
    • Only show the endpoints the paths you want them to use and hide the rest, i.e. use multiple VRFs and L3Outs

What is a VRF?

Virtual Routing and Forwarding (VRF) enables you to have multiple isolated routing tables within the same router or firewall

Why do I need multiple VRFs?

VRFs allow you to have more than one routing table. By default, an EPG/BD can only see the routes within its own VRF, e.g. EPG 192.168.10.0 in the dmz VRF can't see a route to 192.168.20.0 in the production VRF

  • Q: How can we create a route in the dmz VRF so that EPG 192.168.10.0 knows how to get to 192.168.20.0?

    • Advertise 192.168.20.0 from the production VRF to an external device and have that device advertise it back into the dmz VRF, i.e. the dmz VRF learns it can reach 192.168.20.0 through the external device (the ASA firewall)
  • Q: How do we advertise subnets to the external device?

    • An L3Out from each Tenant/VRF peers with one interface of the ASA firewall

What is an L3Out?

The Layer 3 Out (L3Out) in ACI provides connectivity to another device via routing. In many cases an L3Out is the connection to an external network, campus, or the Internet, but in reality an L3Out is just routing between two devices. For example:

  • Sharing routes with a WAN router for application-to-user connectivity
  • Sharing routes with a "stub" firewall for connectivity between VRFs

Configuration

Note

The ACI configuration below assumes the vCenter integration (VMM domain, VLAN pool, and AAEP) has already been set up

Nexus as Code - DMZ

Configuration
---
apic:
  tenants:
    - name: conmurph-vrf-sandwich-dmz

      vrfs:
        - name: dmz

      bridge_domains:
        - name: 192.168.10.0
          vrf: dmz
          l3outs:
            - to-fw-dmz
          subnets:
            - ip: 192.168.10.254/24
              public: true

      application_profiles:
        - name: dmz
          endpoint_groups:
            - name: 192.168.10.0
              bridge_domain: 192.168.10.0
              contracts:
                consumers:
                  - firewall-to-dmz
              vmware_vmm_domains:
                - name: DM_VMM
                  delimiter: '|'
                  deployment_immediacy: immediate
                  resolution_immediacy: immediate

      filters:
        - name: icmp
          entries:
            - name: icmp
              ethertype: ip
              protocol: icmp
        - name: web
          entries:
            - name: http
              ethertype: ip
              protocol: tcp
              destination_from_port: http
              destination_to_port: http

      contracts:
        - name: firewall-to-dmz
          subjects:
            - name: icmp
              filters:
                - filter: icmp
            - name: web
              filters:
                - filter: web

      policies:
        ospf_interface_policies:
          - name: ospf-broadcast
            network_type: bcast

      l3outs:
        - name: to-fw-dmz
          vrf: dmz
          domain: L3_FI
          ospf:
            area: backbone
            area_type: regular
            area_cost: 1
            auth_type: none
            policy: ospf-broadcast
            ospf_interface_profile_name: ospf-broadcast
          node_profiles:
            - name: to-fw-dmz_nodeProfile
              nodes:
                - node_id: 101
                  router_id: 98.98.98.101
                  router_id_as_loopback: false
                - node_id: 102
                  router_id: 98.98.98.102
                  router_id_as_loopback: false
              interface_profiles:
                - name: to-fw-dmz
                  ospf:
                    area: backbone
                    area_type: regular
                    area_cost: 1
                    auth_type: none
                    policy: ospf-broadcast
                    ospf_interface_profile_name: ospf-broadcast
                  interfaces:
                    - channel: LPG_FI-A
                      node_id: 101
                      node2_id: 102
                      svi: true
                      vlan: 998
                      ip_a: 172.16.10.1/24
                      ip_b: 172.16.10.2/24
                      ip_shared: 172.16.10.3/24
                      mtu: 1500
                    - channel: LPG_FI-B
                      node_id: 101
                      node2_id: 102
                      svi: true
                      vlan: 998
                      ip_a: 172.16.10.1/24
                      ip_b: 172.16.10.2/24
                      ip_shared: 172.16.10.3/24
                      mtu: 1500
          external_endpoint_groups:
            - name: to-firewall
              subnets:
                - name: all
                  prefix: 0.0.0.0/0
              contracts:
                providers:
                  - firewall-to-dmz

Nexus as Code - Production

Configuration
---
apic:
  tenants:
    - name: conmurph-vrf-sandwich-production

      vrfs:
        - name: production

      bridge_domains:
        - name: 192.168.20.0
          vrf: production
          l3outs:
            - to-fw-production
          subnets:
            - ip: 192.168.20.254/24
              public: true

      application_profiles:
        - name: production
          endpoint_groups:
            - name: 192.168.20.0
              bridge_domain: 192.168.20.0
              contracts:
                providers:
                  - firewall-to-production
              vmware_vmm_domains:
                - name: DM_VMM
                  delimiter: '|'
                  deployment_immediacy: immediate
                  resolution_immediacy: immediate

      filters:
        - name: icmp
          entries:
            - name: icmp
              ethertype: ip
              protocol: icmp
        - name: web
          entries:
            - name: http
              ethertype: ip
              protocol: tcp
              destination_from_port: http
              destination_to_port: http

      contracts:
        - name: firewall-to-production
          subjects:
            - name: icmp
              filters:
                - filter: icmp
            - name: web
              filters:
                - filter: web

      policies:
        ospf_interface_policies:
          - name: ospf-broadcast
            network_type: bcast

      l3outs:
        - name: to-fw-production
          vrf: production
          domain: L3_FI
          ospf:
            area: backbone
            area_type: regular
            area_cost: 1
            auth_type: none
            policy: ospf-broadcast
            ospf_interface_profile_name: ospf-broadcast
          node_profiles:
            - name: to-fw-production_nodeProfile
              nodes:
                - node_id: 101
                  router_id: 99.99.99.101
                  router_id_as_loopback: false
                - node_id: 102
                  router_id: 99.99.99.102
                  router_id_as_loopback: false
              interface_profiles:
                - name: to-fw-production
                  ospf:
                    area: backbone
                    area_type: regular
                    area_cost: 1
                    auth_type: none
                    policy: ospf-broadcast
                    ospf_interface_profile_name: ospf-broadcast
                  interfaces:
                    - channel: LPG_FI-A
                      node_id: 101
                      node2_id: 102
                      svi: true
                      vlan: 999
                      ip_a: 172.16.20.1/24
                      ip_b: 172.16.20.2/24
                      ip_shared: 172.16.20.3/24
                      mtu: 1500
                    - channel: LPG_FI-B
                      node_id: 101
                      node2_id: 102
                      svi: true
                      vlan: 999
                      ip_a: 172.16.20.1/24
                      ip_b: 172.16.20.2/24
                      ip_shared: 172.16.20.3/24
                      mtu: 1500
          external_endpoint_groups:
            - name: to-firewall
              subnets:
                - name: all
                  prefix: 0.0.0.0/0
              contracts:
                consumers:
                  - firewall-to-production

Client and Server VM Deployment

2 x Ubuntu 20.04 VMs, each configured with its respective IP address and the gateway of its bridge domain (an example netplan configuration is shown after the list).

  • VM Name: conmurph-asa-demo-client-1
    • IP: 192.168.10.1/24
    • GW: 192.168.10.254
  • VM Name: conmurph-asa-demo-server
    • IP: 192.168.20.1/24
    • GW: 192.168.20.254
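
For reference, a minimal netplan configuration for conmurph-asa-demo-client-1 could look like the following. This is only a sketch: the interface name (ens192) and the file name are assumptions and will differ depending on your VM template.

# /etc/netplan/01-netcfg.yaml (example file name)
network:
  version: 2
  ethernets:
    ens192:                      # assumed interface name, check with "ip link"
      addresses:
        - 192.168.10.1/24
      gateway4: 192.168.10.254   # the bridge domain subnet IP is the default gateway

Apply the change with sudo netplan apply, then repeat the same steps on conmurph-asa-demo-server using 192.168.20.1/24 and 192.168.20.254.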

Attach each VM to the port group created in the previous section

  • VM Name: conmurph-asa-demo-client-1
    • Portgroup: conmurph-vrf-sandwich-dmz|dmz|192.168.10.0
  • VM Name: conmurph-asa-demo-server
    • Portgroup: conmurph-vrf-sandwich-production|production|192.168.20.0

From each VM, verify that you can ping the default gateway

  • VM Name: conmurph-asa-demo-client-1
    • GW: 192.168.10.254
  • VM Name: conmurph-asa-demo-server
    • GW: 192.168.20.254

ASAv Deployment

As per the note at the top, this example assumes you have already configured a VMM domain, VLAN pool, and AAEP to connect to your VMware environment. The same VMM domain, VLAN pool, and AAEP configuration is used to connect the ASAv.

I followed the same steps shown in this video to deploy the ASAv

https://www.youtube.com/watch?v=JUUk4h22pHA

Edit the VM settings for the ASAv

  • Configure the Management0 interface to allow SSH connectivity
    • In our lab, DHCP is configured to assign an IP address to the Management0 interface
  • Select standalone mode
  • Select routed mode

Note

The GigabitEthernet0/0 and GigabitEthernet0/1 interfaces will be used to connect to the client and server VMs; however, these will be configured at a later stage

Configure the ASAv

Note

This is an example configuration to allow traffic to pass and is not production ready. For example, the security level for the inside and outside interfaces is the same, and the access lists permit all traffic.

Configuration
conmurph-asav-routed-1# show run
: Serial Number: 9APVW8HMGKK
: Hardware:   ASAv, 2048 MB RAM, CPU Xeon E5 series 2600 MHz
:
ASA Version 9.16(3)19
!
hostname conmurph-asav-routed-1

!
interface GigabitEthernet0/0
nameif outside
security-level 100
ip address 172.16.10.100 255.255.255.0
ospf authentication null
!
interface GigabitEthernet0/1
nameif inside
security-level 100
ip address 172.16.20.100 255.255.255.0
ospf authentication null
!
interface Management0/0
no management-only
nameif management
security-level 0
ip address dhcp
!
same-security-traffic permit inter-interface
!
access-list inside_access_out extended permit ip any4 any4
access-group inside_access_out in interface inside
!
access-list outside_access_in extended permit ip any4 any4
access-group outside_access_in in interface outside
!
router ospf 1
network 172.16.10.0 255.255.255.0 area 0
network 172.16.20.0 255.255.255.0 area 0
log-adj-changes
!
route management 0.0.0.0 0.0.0.0 10.1.100.254 1
!
: end

Add VLANs to UCS and ESXi for L3Out

Note

This section will probably differ depending on your own environment.

The example uses an ASAv running on UCS, which is connected to the ACI fabric through Fabric Interconnects. We therefore need to create an L3 domain to connect to the VMware environment (if one has not already been set up), and in our lab we are reusing the same AAEP that connects to the VMM domain (DM_VMM).

  • Create new VLANs in the LAN section of UCS Manager
    • VLAN Name/Prefix: conmurph-L3Out-VLAN-
    • VLAN IDs: 998, 999
  • Add the new VLANs to the vNIC adapters for the servers hosting the ASAv
  • In ESXi manually create two new port groups on the DVS of the servers hosting the ASAv
    • Portgroup
      • Name: conmurph-L3Out-VLAN-998
      • VLAN ID: 998
    • Portgroup
      • Name: conmurph-L3Out-VLAN-999
      • VLAN ID: 999
  • Edit the settings for the ASAv VM and update Network Adapter 2 and Network Adapter 3
    • Network Adapter 2: conmurph-L3Out-VLAN-998
    • Network Adapter 3: conmurph-L3Out-VLAN-999

Verify
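
The following is a minimal sketch of checks to confirm the VRF sandwich is working, assuming the addressing used above (exact output will vary by environment):

conmurph-asav-routed-1# show ospf neighbor
conmurph-asav-routed-1# show route

show ospf neighbor should list adjacencies with the ACI border leaf SVIs on both the 172.16.10.0/24 and 172.16.20.0/24 segments, and show route should contain OSPF routes for 192.168.10.0/24 and 192.168.20.0/24. Finally, from conmurph-asa-demo-client-1 ping 192.168.20.1 to confirm end-to-end connectivity through the ASAv. On the fabric side, the OSPF neighbor state of each L3Out can also be checked on the APIC under the tenant's Networking > L3Outs section.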

Resources
