An Introduction to cloud security - from the perspective of an on-prem admin

  • Originally Written: February, 2025

Overview

I have an on-premises data centre (servers, networking, storage) background and this post contains some of my notes when learning about areas of security in public clouds.

I like to think of security using the following buckets. Although it can be overwhelming, if you're an on-prem admin much of the knowledge you already have applies to the public cloud. There are also a lot of new concepts, however you may not need them from day one. It really depends on your business requirements and priorities.

For example, although it can be expensive, you may only run a handful of virtual machines (VMs) and a database in the public cloud. You might not need microservices, Kubernetes, serverless, Infrastructure as Code, etc.

I've written the post with this in mind and will walk through cloud security starting with a traditional on-premises focus, then looking at "cloud-native" and additional cloud and application security areas to be mindful of.

Some cloud provider concepts

Let's start with the cloud provider and network.

The next section will cover a few cloud provider concepts using AWS as an example. This is by no means a replacement for any cloud provider documentation or training; it's just setting up the foundation.

Shared responsibility model

I'm sure you've seen many news headlines about a cloud resource such as an S3 bucket being left wide open and publicly available to anyone. The first thing to understand is that there's always some level of security you need to think about. What you're responsible for depends on which services you're using: with Infrastructure as a Service (IaaS) it may be the network, firewall, endpoints, and client data, while with a managed service it may only be the client or customer data.

Reference: Applying the AWS Shared Responsibility Model to your GxP Solution

Identity management

When working with on-premises equipment you probably have accounts which allow users to access and manage specific hardware or software. This same concept applies in public clouds. There are also best practices which are similar to what we have on-prem.

For example:

  • We shouldn't give everyone admin access to on-prem equipment, so why would we do the same for our cloud resources?
    • Don't use admin or root accounts as your day to day account
    • Only provide user or service accounts with the least amount of privileges they require
  • Use multi-factor authentication
  • Clean up any unused accounts
  • Identity providers such as Active Directory are often used for on-prem access, and we can also use identity providers with cloud accounts
  • Have a look at landing zones - Landing Zones and Reference Architectures are best practice blueprints and configurations that provide guidance and automated deployment options for setting up secure, scalable, and compliant environments. Each cloud provider has their own landing zone documents and tools
  • CIEM tools can help to identify user entitlement and identity security issues - more on this later
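To make "least privilege" concrete, here's a minimal sketch contrasting a scoped-down policy with a blanket admin grant. The bucket name is hypothetical, and the `grants_admin` check is a deliberately naive illustration, not a real policy analyzer.

```python
import json

# A hypothetical least-privilege IAM policy: read-only access to a single
# S3 bucket, instead of broad admin rights (bucket name is illustrative).
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports-bucket",
                "arn:aws:s3:::example-reports-bucket/*",
            ],
        }
    ],
}

def grants_admin(policy: dict) -> bool:
    """Rudimentary check: does any statement allow all actions on all resources?"""
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        if stmt.get("Effect") == "Allow" and "*" in actions and "*" in resources:
            return True
    return False

print(json.dumps(least_privilege_policy, indent=2))
```

CIEM tools automate this kind of analysis at scale, across every identity and resource in the account.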

Network and security constructs

Here are some AWS concepts to get started.

VPCs and components

  • Region: Geographically distinct area containing multiple data centers e.g. us-east-1 (N. Virginia)
  • Availability Zone: Isolated location within a region designed to be highly available and fault-tolerant e.g. us-east-1a, us-east-1b, us-east-1c
  • Virtual Private Cloud: Think of this like a container for our components. It holds resources such as VM (EC2) instances, services such as RDS, subnets, gateways, routing tables, and more
  • EC2 instance: Virtual machine
  • Subnet: Segment of a VPC's IP address range where you can place groups of isolated resources within a virtual network
  • Public vs Private Subnet: A public subnet has direct access to the internet via an Internet Gateway, while a private subnet does not and typically uses a NAT gateway for internet access
    • For example, VM (EC2) instances may need to access the internet via the NAT gateway to download software updates without needing a public IP
  • Routing Table: Contains routes which are used to determine where network traffic is directed within a VPC. There can be more than one routing table
  • NAT Gateway: Enables instances in a private subnet to connect to the internet or other AWS services while preventing inbound traffic from the internet
  • Internet Gateway: Highly available component that allows communication between instances in your VPC and the internet
  • Load Balancer:
    • Network Load Balancer (NLB): Operates at Layer 4, distributing network traffic based on ports and IP addresses
    • Gateway Load Balancer (GWLB): Integrates with third-party network appliances for deployment and scaling, operating at Layer 3 for traffic inspection and processing
    • Application Load Balancer (ALB): Provides advanced content-based routing at Layer 7, ideal for HTTP/HTTPS applications requiring host-based and path-based routing
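The VPC and subnet concepts above can be explored locally with Python's `ipaddress` module. The CIDR ranges below are illustrative, but the containment checks mirror how instances are placed into public or private subnets inside a VPC's address range.

```python
import ipaddress

# Illustrative VPC carved into a public and a private subnet.
vpc = ipaddress.ip_network("10.0.0.0/16")
public_subnet = ipaddress.ip_network("10.0.1.0/24")
private_subnet = ipaddress.ip_network("10.0.2.0/24")

# Both subnets must fall inside the VPC's address range.
assert public_subnet.subnet_of(vpc)
assert private_subnet.subnet_of(vpc)

def subnet_for(ip: str):
    """Return which subnet (if any) an instance's private IP belongs to."""
    addr = ipaddress.ip_address(ip)
    for name, net in [("public", public_subnet), ("private", private_subnet)]:
        if addr in net:
            return name
    return None

print(subnet_for("10.0.1.25"))   # an instance in the public subnet
print(subnet_for("10.0.2.40"))   # an instance in the private subnet
```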

VPC peering and Transit Gateways

  • VPC Peering: You can connect two or more VPCs together, however it requires a full mesh, so depending on the number of VPCs you have it may not scale well
  • Transit Gateway (TGW): Think of a TGW like a core router in an on-prem network: it allows you to connect VPCs together much more easily.

ACLs and Security Groups

  • Network ACL: Similar to the ACLs on your on-prem devices. It controls traffic in and out of one or more subnets. You can configure both Allow and Deny rules, which is different from the Security Group construct below, where you can only create Allow rules.
  • Security Group: Think of this like a virtual firewall attached to the interface of your instance (VM) to control inbound and outbound traffic. Again, this is for Allow rules only, so it can help provide a zero trust model as long as you don't permit any/any. You can also reference one security group from another when creating rules. This makes it simpler to allow traffic between security groups without needing to know or memorize subnets or IPs

Tips

  • Try not to use the defaults provided when it comes to security groups
  • Always ensure you've confirmed which traffic is allowed into and outbound from a security group
  • You should never allow all traffic from any source inbound to a security group - it's very easy to link a subnet to an Internet Gateway, and you then have the potential of allowing anyone to access your workloads
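The tips above can be sketched as a tiny audit: flag any inbound rule open to 0.0.0.0/0. The rule dictionaries below loosely mirror the shape the AWS API returns, but are simplified for illustration.

```python
# Minimal sketch of a security-group audit: flag inbound rules that allow
# traffic from anywhere (0.0.0.0/0). Rule format is simplified for illustration.
def find_open_rules(inbound_rules):
    open_rules = []
    for rule in inbound_rules:
        if any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])):
            open_rules.append(rule)
    return open_rules

rules = [
    {"FromPort": 443, "ToPort": 443, "IpProtocol": "tcp",
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},          # open to the world
    {"FromPort": 22, "ToPort": 22, "IpProtocol": "tcp",
     "IpRanges": [{"CidrIp": "203.0.113.0/24"}]},     # restricted to one network
]

for rule in find_open_rules(rules):
    print(f"Port {rule['FromPort']} is open to the internet")
```

CSPM tools (covered later) run exactly this kind of check continuously across every account and region.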

Flow telemetry

  • VPC Flow Logs: We can capture packet flows on premises using Netflow or sFlow. In our public cloud environment we can use services such as VPC Flow Logs to capture information about the traffic going to and from network interfaces in your VPC. This can also be ingested by various security products to provide visibility and create security policies.

Reference: https://aws.amazon.com/blogs/aws/learn-from-your-vpc-flow-logs-with-additional-meta-data/
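To give a feel for what VPC Flow Log data looks like, here's a sketch that parses a record in the default (version 2) field order. The sample record itself is fabricated.

```python
# Sketch of parsing a VPC Flow Log record in its default format.
FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_flow_log(line: str) -> dict:
    return dict(zip(FIELDS, line.split()))

# Fabricated sample record: an accepted HTTPS flow (protocol 6 = TCP).
record = parse_flow_log(
    "2 123456789012 eni-0a1b2c3d 10.0.1.25 198.51.100.7 "
    "49152 443 6 10 8400 1620000000 1620000060 ACCEPT OK"
)
print(record["srcaddr"], "->", record["dstaddr"], record["action"])
```

Security products ingest these records at scale to build traffic maps and suggest policies; the parsing step is the same.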

Landing Zones and Reference Architectures

There is a huge amount more to learn about public clouds however the overview we've seen so far should provide a good starting point for the rest of the post. I always find it easier to learn by getting some hands-on experience with the concepts so it makes sense to start creating and configuring resources (just make sure to set a budget!).

However you may be wondering how best to structure all of these concepts (and many more) so that you have an environment suitable for production workloads. This is where a landing zone can help.

Each public cloud provides landing zone reference architectures and tools to help you build a modular, scalable, secure multi-account environment.

For example have a look at Figure 1 in the Azure landing zone architecture document for an idea of what a landing zone architecture may consist of.

Reference: Figure 1 https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/ready/landing-zone/

Firewalls

We've already looked at a couple of security concepts available in the network - the Network ACL and the Security Group. You may also want to use firewalls just like you have in your on-prem environment. So the question is, can you deploy what you have on-prem just in a virtualized form factor, or do you need something different in a public cloud?

As always, it depends.

There's no problem installing virtual firewalls (e.g. FTDv or ASAv) into a public cloud; in fact you'll generally find the major public clouds already have each vendor's virtual firewalls in their marketplaces.

One way I like to think about this comes down to whether you have a very static environment or a very dynamic environment.

Traditional on-premises environments can be very static or slow moving - the virtualization, server, storage, and network infrastructure is typically fixed, well known, and managed by different siloed teams. The locations are fixed to a couple of physical data centers, and if a change is required it can take days or weeks for the responsible teams to approve and implement it.

If this setup is replicated in a public cloud there's no reason you couldn't manually (or with some help from automation) configure and manage one or more virtual firewalls. The environment is locked down to a few regions, minimal changes are required, and workloads are known.

See this Deploy a Cluster for Threat Defense Virtual in a Public Cloud config guide for an example of how to set this up yourself.

However, one of the reasons people started using public cloud is that it's so easy to create things. I can log in to a cloud provider's portal and start using their services in whichever regions I need. No more submitting a ticket and waiting days or weeks for a network, server, VM, or storage to be created. Public cloud therefore tends to be more dynamic, with a higher rate of change for resources than on-premises. Additionally, each cloud provider launches new services every year which you may want to use.

In this case, it may make sense to use a platform built for this type of environment and workload. I'll show you an example of one platform below but first, when thinking about whether to deploy your own firewalls or use a more "cloud-native" solution, here are some points to think about.

  • How static or dynamic is the environment?
  • Who's managing the firewall?
  • Will they need to work with another team to stitch the traffic into the firewall?
  • Are you deploying across multiple regions/cloud providers?
  • Is this environment separate from on-premises?
  • Do you think you'll need to scale up or down depending on the traffic?
  • How will you onboard new services or VPCs?
  • Do you need to integrate with "cloud-native" services such as AWS CloudFront or AWS S3?

Multicloud Defense

There are a number of security platforms available that have been built for public cloud workloads. I've been working with Multicloud Defense, so I'll highlight a couple of its capabilities. First, it's a SaaS platform (part of Cisco Security Cloud Control) designed specifically for organizations with complex cloud environments. From the portal you can deploy one or more gateways (think of them like L4-L7 firewalls) into the cloud provider and region of your choice. The gateways are built to be highly available and auto-scaling, and provide an auto-recovery capability.

One thing to remember is that if you deploy virtual firewalls yourself you'll have to take care of redundancy and high availability.

Additionally, you'll need to configure the environment to forward traffic into the firewalls. The exact configuration would depend on the architecture but here are examples of what resources you may need.

  • Security VPCs - for centralized security architecture (see below for example)
  • VPC and DNS Flow Log capture - you could either pull these records into the virtual firewall or security appliance if it's supported, or you may just want to capture flow logs in the cloud provider to look at later using their tools
  • Network Load Balancer - This is to allow ingress traffic from an external client to reach one of the firewalls
  • Gateway Load Balancers (GWLB) - To allow egress traffic from a client in the cloud to be sent to one of the firewalls
  • GWLB endpoints - Enables you to route traffic to a service that you’ve configured using the GWLB
  • PrivateLink - Allows you to send traffic from the application VPC to the GWLB without having to use a public internet gateway, NAT gateway, VPN
  • Firewalls (Deployment, Insertion, Autoscaling)
  • AWS Transit Gateways (New or existing, TGW attachment)
  • Application VPC subnet routing to TGW

Again, there is no issue setting this up yourself, and it may make sense if it's a static environment where only small, infrequent changes are required. Here's an example: Deploy a Cluster for Threat Defense Virtual in a Public Cloud

In the case of Multicloud Defense, the configuration above, such as deploying the gateways, load balancers, VPCs, and routing, is performed automatically for you.

Besides the automated gateway deployment and infrastructure configuration, there are a number of other benefits that these platforms provide.

By connecting to each public cloud, they're able to provide continuous real-time discovery of multicloud networks, workloads, and resources.

They can connect to security intelligence feeds to identify where potential issues may exist, for example a VM in your VPC connecting to a known malicious IP. Multicloud Defense receives feeds from Cisco Talos and other sources such as the OWASP Core Rule Set for its Web Application Firewall capability.

We can then create various security policies, just like we can with a regular firewall, and push these to any gateway that's been deployed.

Minimizing manual tasks

One of the benefits of using a platform built for public clouds is the focus not just on the firewalling policies but also the integration with the cloud environment itself.

As an example of what I mean, imagine you have an application hosted in an AWS S3 bucket. The application is delivered by AWS CloudFront, which is a Content Delivery Network (CDN). We want to insert a Multicloud Defense gateway between CloudFront and S3 to apply security policies.

To do that we need to know the source IP addresses (i.e. the CloudFront IPs) to use in our Reverse Proxy policy. We could track these manually, but since they're publicly available, Multicloud Defense can track them for us.

Reference: https://www.youtube.com/watch?v=cM8mrxqObMo

In the portal we can create an address object and then select the service we want to track, in this case CloudFront. Multicloud Defense then dynamically pulls in any published CloudFront IP.

We then use that address object in our reverse proxy policy without ever having to worry about individual IP addresses. If you've worked with Kubernetes before, think of this like a Kubernetes service which tracks pod endpoint IPs.

It's also not just CloudFront; we can create objects for many different cloud services.
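To illustrate what tracking a service's published IPs involves, here's a sketch that filters a trimmed-down sample of AWS's published ip-ranges.json (the real file lives at https://ip-ranges.amazonaws.com/ip-ranges.json) for CloudFront prefixes. The sample prefixes and structure fields shown are illustrative, and the real file is fetched over the network rather than hard-coded.

```python
# Trimmed-down sample shaped like AWS's published ip-ranges.json.
sample_ip_ranges = {
    "prefixes": [
        {"ip_prefix": "120.52.22.96/27", "service": "CLOUDFRONT"},
        {"ip_prefix": "3.5.140.0/22", "service": "S3"},
        {"ip_prefix": "205.251.249.0/24", "service": "CLOUDFRONT"},
    ]
}

def prefixes_for_service(data: dict, service: str):
    """Collect the published prefixes belonging to one service."""
    return [p["ip_prefix"] for p in data["prefixes"] if p["service"] == service]

cloudfront_prefixes = prefixes_for_service(sample_ip_ranges, "CLOUDFRONT")
print(cloudfront_prefixes)
```

A dynamic address object does this continuously, so the policy never references individual IPs.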

Endpoint security

This starts the evolution of acronyms we'll cover, with the first one being CWPP, or Cloud Workload Protection Platform. In the 2010s the idea emerged not to remove security from networking or firewalling, but to extend it into the endpoint. This was focused on three capabilities:

  • Visibility of what's running on the endpoint e.g. processes
  • Micro-segmentation within the endpoint rather than in the network
  • Vulnerability visibility and management i.e. which packages are installed and do they have a known vulnerability

Implementing these functions is similar to the previous section. First we need data and visibility, then we need to analyse and understand what the data means, and then we can create security policies.

There are multiple ways to gather data but two options are to install an agent on the endpoint or gather data through an API of a resource (e.g. AWS API). Having multiple approaches doesn't mean we need to select only one. There are benefits of each so it makes sense to use multiple options to collect data. Additional context could be provided from other sources such as inventory files, CMDBs, vulnerability databases.

  • Agent:
    • + Provides detailed insights into the workload, including process-level monitoring and behavior analysis
    • + More granular micro-segmentation as we can enforce security rules in the endpoint itself (e.g. iptables or firewall rules)
    • - You need to install and manage an agent
    • - The security team using the CWPP is not always the same team maintaining the endpoint, so this can cause friction
    • - You need to lifecycle manage the agent - this may be centralized and managed by the CWPP tool
    • - Some appliances (e.g. third-party vendors) don't allow you to install an agent
    • - There may be performance-related issues
  • Agentless:
    • + Nothing to install, nothing to maintain
    • + Allows more granular segmentation compared with a traditional network ACL or firewall (not as granular as an agent though)
    • + Minimal performance impact
    • - Limited visibility e.g. you won't see packages or processes from an endpoint
    • - Not as granular security enforcement

Once you've collected the data, you need the platform to analyse it and help you understand where potential issues exist, provide access to historical forensic reports of the endpoint (a process tree, for example), and help you understand different traffic patterns within the environment.

You can then create and enforce security policies on the endpoint or within the cloud. This may be based on specific network traffic e.g. this endpoint can talk to that endpoint but only using HTTPS traffic, or alternatively you may want to create micro-segmentation policies based on a vulnerability within a package that's installed on the endpoint. For example, if you see a vulnerability for the version of OpenSSH installed on an endpoint you may want to quarantine the endpoint until it can be patched.

There are different options when it comes to enforcement. Typically you want to use platforms which leverage the existing native endpoint tools but automate the process - for example, automatically programming iptables in Linux or the Windows Advanced Firewall rather than running a separate proprietary firewall. As seen in the networking section, the cloud providers already support security groups, so can we use the CWPP to configure and maintain these groups and rules for us?
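As a sketch of what "programming the native firewall" means, here's a toy policy format translated into iptables-style commands. A real CWPP agent applies rules directly against the kernel; the policy schema here is made up for illustration.

```python
# Illustrative micro-segmentation policy: allow HTTPS from one subnet to a
# workload, drop SSH from everywhere. Schema is fabricated for this sketch.
policy = [
    {"src": "10.0.1.0/24", "dst": "10.0.2.10", "port": 443, "action": "ACCEPT"},
    {"src": "0.0.0.0/0",  "dst": "10.0.2.10", "port": 22,  "action": "DROP"},
]

def to_iptables(rules):
    """Render each policy entry as an iptables-style command string."""
    lines = []
    for r in rules:
        lines.append(
            f"iptables -A INPUT -p tcp -s {r['src']} -d {r['dst']} "
            f"--dport {r['port']} -j {r['action']}"
        )
    return lines

for line in to_iptables(policy):
    print(line)
```

The value of a CWPP is keeping rules like these in sync as workloads and vulnerabilities change, rather than the rendering itself.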

Similar to the firewall section, there are a number of platforms available however the one I've been using recently has been Cisco Secure Workload (CSW).

CSW is available as a SaaS platform or an on-premises cluster and provides endpoint protection for both cloud and traditional on-prem workloads.

There are multiple options to define micro-segmentation policies, however two common approaches are based either on network traffic (source, destination, and protocol) or on a package vulnerability. The following reference has a good demo video of how this works.

Reference: Software Vulnerability and Adaptive Policy Demo

Enforcement is performed using native tools such as Linux iptables or the Windows Advanced Firewall. For Kubernetes workloads, Secure Workload uses a daemonset (a pod running on each K8s node) to configure the Kubernetes iptables rules. CSW can also configure and manage security groups for endpoints which can't have an agent installed, or to provide a defense-in-depth approach to security (more on this shortly).

CSW gathers forensic events through the native tools such as the Linux kernel APIs, Audit and syslog, Windows kernel APIs, Windows events, and AIX audit system. These events include privilege escalation, user logon failures, shellcode executions, file access, socket creations, binaries or libraries which have been changed, network anomalies, or side channel attacks. You're then able to view the forensic events and process details as part of a timeline.

There are a number of benefits to having this level of endpoint visibility:

  • Helps in understanding the root cause of security breaches and aids in refining security policies
  • You can detect unusual behavior which may be indicative of malicious activities and stop it before it escalates
  • Security teams can quickly reconstruct events leading up to and following an incident
  • Many industries have stringent regulatory requirements for logging and monitoring system activities, so this can help meet compliance requirements

As you'll see throughout the post, just because we have one type of security doesn't mean we should throw out the other security components e.g. having endpoint security doesn't mean we remove traditional firewalls. Defense in depth is a cybersecurity strategy that employs multiple layers of security controls and measures to protect information and systems, ensuring that if one layer is breached, others continue to provide protection.

In this case we may want endpoint protection enforced through agents. We could implement security groups and network ACLs to provide additional protection or to protect workloads which aren't running agents. We also then still want our north/south or east/west L4-L7 firewalls for IPS/IDS, URL filtering, FQDN filtering, Web Application protection, Data Loss Prevention, and other functions.

Cloud provider and cloud-native security

Now if you're an on-prem admin at a "traditional" enterprise that is moving to the public cloud, you may already have enough places to start with cloud security. Although it can be very expensive, when starting out in cloud many organizations will "lift and shift" their architecture. This means they take what they know and replicate it in a cloud such as AWS. For example, they may create some VPCs, subnets, and Network ACLs and then, rather than using the native cloud services, install applications onto VM instances.

In that case, using the security capabilities already described such as network security, firewalling, and endpoint protection is a very good place to start.

But what about all the "cloud-native" stuff? Let's now have a look at the next level of security in the public cloud and what other tools are available. This brings us back to the evolution of the acronyms.

  • Software Bill of Materials (SBOM): This is a comprehensive inventory detailing the components, libraries, and dependencies that make up an application. Some organizations use them to understand the software's supply chain, identify potential vulnerabilities, manage licenses, and ensure compliance with regulations. There are typically two formats that you'll find for SBOMs.

    • SPDX (Software Package Data Exchange): Open standard developed under the Linux Foundation, designed to provide a common format for sharing software package data across organizations. SPDX files are typically written as tag/value (.spdx), JSON (.spdx.json), YAML (.spdx.yml), RDF/XML (.spdx.rdf), or spreadsheets (.xls).
    • CycloneDX: Open standard originally developed for the software security community, designed to support security use cases such as vulnerability analysis and supply chain risk management. It supports XML, JSON, and Protocol Buffers.
  • Cloud Security Posture Management (CSPM): This is focused on detection of security risks and resource misconfigurations in cloud environments (compared to CIEM which is user entitlement focused). For example, a CSPM can identify publicly accessible S3 buckets, report on VMs that do not have encryption enabled or are missing critical security patches, or alert if there are any security groups allowing unrestricted (0.0.0.0/0) access.

  • Cloud Identity and Entitlement Management (CIEM): These tools help you ensure users have appropriate access levels to prevent unauthorized access. For example, are there users who have too many privileges (e.g. admin access to all resources)? They can also help in auditing and tracking user access activities for security and compliance.

  • Cloud Native Application Protection Platform (CNAPP): While you can find individual tools providing the capabilities above, a CNAPP is a consolidation of these into a single platform. That means you get CSPM, CWPP, SBOM, CIEM, and more (Kubernetes Security Posture, Infrastructure as Code Scanning), in one product. You might see that as a good direction - you get all of those capabilities in one product. Alternatively it might be a disadvantage - It's trying to do too much and therefore might lack features compared to products that focus on just one area.

You may also find some tools in the categories above include compliance posture assessments with frameworks like MITRE, CIS, and NIST.

Reference: https://softwareanalyst.substack.com/p/redefining-cnapp-a-complete-guide
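As a toy illustration of the CSPM checks described above (public buckets, unencrypted volumes), here's a sketch that scans a fabricated resource inventory. Real CSPM tools build this inventory continuously through the cloud provider's APIs.

```python
# Fabricated resource inventory; field names are illustrative, not the AWS API.
inventory = [
    {"type": "s3_bucket", "name": "logs-bucket", "public": True},
    {"type": "s3_bucket", "name": "data-bucket", "public": False},
    {"type": "ebs_volume", "name": "vol-1", "encrypted": False},
]

def run_checks(resources):
    """Flag the two misconfigurations described in the CSPM bullet above."""
    findings = []
    for r in resources:
        if r["type"] == "s3_bucket" and r.get("public"):
            findings.append((r["name"], "publicly accessible bucket"))
        if r["type"] == "ebs_volume" and not r.get("encrypted"):
            findings.append((r["name"], "unencrypted volume"))
    return findings

for name, issue in run_checks(inventory):
    print(f"{name}: {issue}")
```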

Although each tool has different features, in many of them you'll find a unified view of security findings, severity, and associated assets.

It's important to think about how these issues should be resolved. You might see better results if you're proactive in notifying developers or cloud admins of potential issues, rather than relying upon them to log in to the security tool themselves. Some of the cloud security tools have native integration with case management tools, but if not you can often build this yourself. For example, you may send an email to a developer or cloud admin based on a potential issue. This could be escalated if an SLA is breached and the issue has not been resolved.

MITRE ATT&CK

Something else you might come across when researching cloud security is the MITRE ATT&CK framework. MITRE is a not-for-profit organization that provides technical expertise across various domains, including cybersecurity. It created a popular framework known as ATT&CK (Adversarial Tactics, Techniques, and Common Knowledge), which is a comprehensive knowledge base of adversarial tactics and techniques based on real-world observations.

The framework is made up of Tactics, Techniques, and Procedures (TTPs) which help in understanding and categorizing adversarial behavior.

  • Tactics: Represent the "why" of an attack technique. They are the adversary's tactical goals and describe the overall objective they are trying to achieve at different stages of an attack. For instance, in the context of cloud security, tactics could include initial access, persistence, or data exfiltration.
  • Techniques: Are the "how" and describe the specific methods adversaries use to achieve their objectives (tactics). Each technique can have multiple sub-techniques, detailing variations in the way an adversary might implement them. For example, under the tactic of "privilege escalation," a technique in the cloud could be exploiting vulnerable cloud service configurations.
  • Procedures: Are the "details" and refer to the specific implementations of techniques by adversaries in real-world scenarios. They provide the nuanced details about how adversaries execute techniques in practice, often including specific tools or scripts used in attacks.

A number of the CWPP/CSPM/CNAPP vendors integrate the MITRE ATT&CK framework into their products to enhance their threat detection, analysis, and response functionalities.

Reference: https://attack.mitre.org/resources/

Securing the application - shifting left

Similar to the notion of moving security towards the endpoint, there has been an idea over the past few years to shift security left towards developers. This doesn't mean removing existing security measures but instead of addressing security when the app is in production, shifting left involves embedding security from the beginning of the development process.

There are a number of places and ways this can be implemented and you may come across the following categories of tools.

SAST (Static Application Security Testing)

SAST is a security testing methodology that analyzes source code, bytecode, or binary code for vulnerabilities without executing the program. It can identify potential security flaws early in the software development lifecycle, allowing developers to address issues before the software is deployed.

For an example, have a look at GitHub Code Scanning, which, when enabled for a repository, scans the code in that repository and identifies potential security vulnerabilities or coding errors.

Reference: https://docs.github.com/en/code-security/code-scanning/managing-code-scanning-alerts/triaging-code-scanning-alerts-in-pull-requests
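As a concrete example of the kind of flaw a SAST tool flags, here's a classic SQL injection: user input concatenated into a query, alongside the parameterized fix. The schema and data are illustrative.

```python
import sqlite3

# Toy in-memory database so the example is self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def lookup_unsafe(name: str):
    # Flagged by SAST: attacker-controlled input concatenated into the query.
    return conn.execute(f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name: str):
    # The fix: a bound parameter, so input can't change the query structure.
    return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()

print(lookup_unsafe("' OR '1'='1"))  # injection returns every row
print(lookup_safe("' OR '1'='1"))    # parameterized version returns nothing
```

Static analysis catches this without ever running the code, because the string interpolation into a SQL call is visible in the source itself.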

Not just for production apps

This is also a great option if you're creating small scripts; don't think you need to be writing production applications to use code scanning tools. Additionally, if you have some "serverless" or function-as-a-service workloads that are purely code (i.e. not connected to anything else), this can help provide a layer of security before the code is put into production.

DAST (Dynamic Application Security Testing)

DAST is a security testing approach that evaluates applications in their running state. It simulates attacks on a live application to identify vulnerabilities and security weaknesses. DAST does not require access to the source code which makes it useful for finding runtime and environment-related issues before the application is put into production.

IAST (Interactive Application Security Testing)

IAST combines elements of both SAST (code-level analysis) and DAST (runtime behavior analysis) by analyzing applications from within while they are running. Again, this is typically done in the testing phase before the application is deployed to production. IAST tools are integrated into the application server or the application's runtime environment, typically by embedding agents within the application code or server, which allows you to monitor and interact with the application during execution.

SCA (Software Composition Analysis)

SCA tools are designed to identify and manage open source and third-party components within an application. They focus on detecting known vulnerabilities, license compliance issues, and potential risks associated with these components. The output of the tool is usually a report highlighting the components found, the versions, associated vulnerabilities, and any licensing concerns.

These types of tools are similar to the SBOMs covered earlier but they tend to cover two different areas. SCA focuses specifically on identifying vulnerabilities and license issues in open source and third-party components, while an SBOM provides a broader inventory of all components, regardless of their source. SCA is primarily used for security and compliance management, whereas an SBOM is used for transparency, traceability, and supply chain management.
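To make the SBOM/SCA distinction concrete, here's a sketch that reads a trimmed CycloneDX-style SBOM and performs the SCA step of matching components against a toy vulnerability list. The components and the advisory data are fabricated; real SCA tools match against feeds like the NVD.

```python
import json

# A trimmed CycloneDX-style SBOM with fabricated components.
sbom_json = json.dumps({
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {"type": "library", "name": "requests", "version": "2.19.0"},
        {"type": "library", "name": "flask", "version": "3.0.0"},
    ],
})

# Toy advisory data: (name, affected_version) pairs, not a real feed.
known_vulnerable = {("requests", "2.19.0")}

def vulnerable_components(sbom: str):
    """The SCA step: match SBOM inventory against known-vulnerable versions."""
    bom = json.loads(sbom)
    return [
        c["name"] for c in bom.get("components", [])
        if (c["name"], c["version"]) in known_vulnerable
    ]

print(vulnerable_components(sbom_json))
```

The SBOM is just the inventory half; the lookup against advisory data is what turns it into a security finding.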

RASP (Runtime Application Self-Protection)

If you think of SAST, DAST, IAST, and SCA tools being used from development to testing and then staging, RASP tools are integrated directly into an application's runtime environment and provide continuous protection against threats.

As we've seen throughout the post, just because we have another layer of security (this time embedded in the app) doesn't mean we should remove the existing layers. We always want to have a strategy which leverages multiple security measures. You might hear this referred to as Defense in depth.

Take this flow as an example. If a user sends a malicious request we have multiple places to act.

  • We may have a firewall which detects connections from a malicious IP
  • A web application firewall might detect serialized data that may be used in a deserialization attack. The FW and WAF could be the same product or individual devices.
  • If you have endpoint protection you gain visibility into any vulnerable packages, processes that have been started, and any potential forensic events on that endpoint.
  • Finally, embedding security into the application through a RASP tool gives you greater visibility and more enforcement options. Remember, though, that it's not always possible to modify the application or run enforcement on the endpoint, which is exactly why we want multiple layers of security.
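As a rough sketch of the RASP idea, here's a toy "guard" that wraps a request handler and inspects its inputs at call time, blocking payloads that match suspicious signatures (a pickle stream header, a naive SQL-injection pattern). The patterns, the `BlockedRequest` exception, and the handler are all invented for illustration; a real RASP agent hooks much deeper into the runtime than a decorator can.

```python
import functools
import re

# Signatures a runtime guard might flag; this list is illustrative only.
SUSPICIOUS = [
    re.compile(rb"^\x80[\x02-\x05]"),                    # pickle protocol header
    re.compile(rb"(?i)('\s*or\s+1=1|union\s+select)"),   # naive SQLi patterns
]

class BlockedRequest(Exception):
    pass

def rasp_guard(func):
    """Inspect string/bytes arguments at call time and block suspicious input."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for value in list(args) + list(kwargs.values()):
            data = value.encode() if isinstance(value, str) else value
            if isinstance(data, bytes) and any(p.search(data) for p in SUSPICIOUS):
                raise BlockedRequest(f"blocked suspicious input in {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@rasp_guard
def handle_request(payload):
    return f"processed {len(payload)} bytes"

print(handle_request("hello world"))        # allowed through
try:
    handle_request(b"\x80\x04\x95data")     # looks like a pickle stream
except BlockedRequest as e:
    print(e)
```

The point of the sketch is that the check runs inside the application with full context about which function received which input, which is what distinguishes RASP from a WAF sitting in front of the app.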

Here's an example of what you might see in one of these tools; in this case it's the Secure Application module in AppDynamics.

ASPM (Application Security Posture Management)

You might also come across ASPM tools, which Gartner describes as "analyzing security signals across software development, deployment and operation to improve visibility, better manage vulnerabilities and enforce controls". An ASPM tool can encompass various aspects of application security, including those addressed by other security testing categories like SAST, DAST, IAST, SCA, and RASP.
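The core mechanic behind ASPM is aggregation: pull findings from several scanners and normalize them into one schema so severity and ownership can be managed in a single place. Here's a minimal sketch of that idea; the tool names, field names, and findings are all invented for the example.

```python
# Toy ASPM-style aggregation: normalize findings from different
# scanners (shapes below are hypothetical) into one common schema.

from collections import Counter

sast_findings = [{"rule": "hardcoded-secret", "file": "app.py", "severity": "high"}]
dast_findings = [{"check": "xss-reflected", "url": "/search", "risk": "medium"}]
sca_findings = [{"package": "requests", "cve": "CVE-2023-32681", "cvss": 6.1}]

def normalize(source, finding):
    """Map each tool's native fields onto one common schema."""
    if source == "sast":
        return {"source": source, "title": finding["rule"],
                "location": finding["file"], "severity": finding["severity"]}
    if source == "dast":
        return {"source": source, "title": finding["check"],
                "location": finding["url"], "severity": finding["risk"]}
    if source == "sca":
        sev = "high" if finding["cvss"] >= 7 else "medium"
        return {"source": source, "title": finding["cve"],
                "location": finding["package"], "severity": sev}

posture = (
    [normalize("sast", f) for f in sast_findings]
    + [normalize("dast", f) for f in dast_findings]
    + [normalize("sca", f) for f in sca_findings]
)

# One consolidated view of severity across all tools
print(Counter(item["severity"] for item in posture))
```

Commercial ASPM platforms add deduplication, risk scoring, and policy enforcement on top, but the unified schema is the foundation.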

One of the major ASPM vendors seems to be ArmorCode, but I found this blog post, https://www.resilientcyber.io/p/the-rise-of-application-security, contained a lot of valuable information and a good overview of the ASPM landscape.

Secure Hybrid Connectivity

The final piece of the puzzle is for those organizations who have workloads across multiple environments. This could be between regions, between clouds, or from on-prem to one or more clouds. There are many options available and this post won't be going into details but I like to categorize them in a few ways.

  • Build it yourself vs use a platform to build the connectivity for you
  • Use the cloud providers offerings vs a third party
  • From your premises to the cloud, from your premises to a colo (e.g. Equinix/Megaport), or from a colo provider to the cloud

Do it yourself

We saw in the firewall section that there's no problem with deploying and maintaining infrastructure yourself. This is also the case when setting up hybrid or multi-cloud connectivity. You could manually deploy a virtual router or firewall in different public clouds and manually configure tunnels, just like you might set up VPNs between different physical sites.

Alternatively, each cloud provider offers different VPN services and connectivity options, such as those seen in the Network-to-Amazon VPC connectivity options guide. You could use one of these options to set up the connectivity manually.

If you don't want to use the internet for the connection between on-prem and public cloud, you could also use a service such as AWS Direct Connect or Azure ExpressRoute to provide a private connection between your data centre and their environment. These typically also support MACsec encryption if required.

https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect.html

If you haven't come across colocation facilities before, e.g. Equinix, they allow you to rent space, power, and cooling to install your own equipment within a secure managed environment. They also often serve as interconnection hubs where you can establish direct connections to a variety of partners, cloud providers, and internet exchange points.

Software Defined Platforms

Many customers are using SD-WAN solutions to automate and manage their on-premises networks (e.g. branch to campus or campus to DC). These solutions can often be used to also configure secure connections between on-premises and public clouds or to colo facilities such as Equinix. There are also other platforms focusing on multicloud networking such as Megaport, Aviatrix and Alkira.

Summary

Hopefully this has given you a good start in understanding some of the concepts, acronyms, and tools available when it comes to public cloud security. This has been my view from taking my on-prem knowledge and applying it to public cloud concepts, so it may not necessarily match that of someone working purely in cloud. If you're like me and your organization is "lifting and shifting" an architecture into a cloud, although it could be expensive, there should be a good amount of information here to get you started.
