The Forgotten Need for Network Observability in the Rush to Migrate to the Cloud
2024-06-21

by Martin Roesch

As enterprises embrace a multi-cloud strategy, the top use case, apps siloed on different clouds, has grown to 57% of enterprises, up from 44% last year. So, when it comes to cloud security, it makes sense for enterprises to focus on app security right away.

Using the “before, during, and after” phases of the threat continuum, SecOps and CloudOps teams start by considering what to do “before” an attacker shows up. The first task is to make the network hard to break into by deploying cloud security posture management (CSPM) to ensure configurations are correct and that no vulnerable software is running in our workloads. We also add tools like cloud-native application protection platforms (CNAPPs) and cloud workload protection platforms (CWPPs) to try to detect and block potential attacks with access controls and hygiene for the workload environment.

However, that still leaves gaps in the “during” and “after” phases, where network-level monitoring and detection in the cloud become critical capabilities. Understanding the interconnections between things we don’t control, meaning how workloads are interoperating with each other, how VPCs are interoperating with each other, and how clouds are interoperating with each other, requires network-level observability. Here’s why.

Challenges discovering what should never happen

There’s an inherent problem with simply observing cloud workloads and relying on a self-reporting loop in which the workload is responsible for telling us, via the logs it generates, that it has been compromised. Attackers know this well and aren’t shy about silencing systems that might report their presence as they take over a machine. Log management and analytics systems also have a hard time discovering things that should never happen, because they typically look for specific signs of attack and are much less apt to flag changes in network activity that could indicate a successful compromise. Finally, it’s difficult to know with a high degree of certainty whether what we are observing is actually happening, or whether it is a response staged by a sophisticated attacker who has taken control of a machine.


There aren’t any easy, reliable ways to see how workloads are interoperating with each other at scale without looking at the network level. This was one of the original drivers for network security as a discipline. Early in the history of the security industry, much of the effort to provide security capabilities consisted of log analytics and agents running on devices. This approach was limited: agents couldn’t be deployed everywhere, couldn’t be relied upon to report accurately in the wake of a breach, and had to be constantly curated across broad deployment footprints, a recurring cost of their use. Log analytics could only operate on whatever data the systems under attack chose to produce, if they produced any data at all.

The network is the common ground upon which all modern computing relies. As such, it is a fundamental source of truth for the activities between its participants, and it can be monitored at massive scale for everything connected to it, no matter what the underlying systems are. That makes it an excellent place to watch both for fundamental compromises and for activities that are contextually “incorrect” because they should never happen in an organization under any circumstances.
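As a simple illustration of catching something that “should never happen,” consider flagging any flow in which a database subnet initiates traffic to the internet. The record format, addresses, and policy below are hypothetical, a minimal sketch of the kind of check that flow-level observability makes possible:

```python
from ipaddress import ip_address, ip_network

# Hypothetical flow records: (src_ip, dst_ip, dst_port) tuples, as might be
# derived from VPC flow logs or NetFlow/IPFIX-style metadata.
flows = [
    ("10.0.1.15", "10.0.2.40", 5432),   # app tier -> database tier (expected)
    ("10.0.2.40", "203.0.113.9", 443),  # database tier -> internet (!)
]

# Assumed policy: the database subnet should never initiate traffic
# to anything outside the internal address space.
DB_SUBNET = ip_network("10.0.2.0/24")
INTERNAL_NETS = [ip_network("10.0.0.0/8")]

def is_internal(ip: str) -> bool:
    addr = ip_address(ip)
    return any(addr in net for net in INTERNAL_NETS)

def should_never_happen(flow) -> bool:
    src, dst, _port = flow
    return ip_address(src) in DB_SUBNET and not is_internal(dst)

violations = [f for f in flows if should_never_happen(f)]
print(violations)  # the database-to-internet flow is flagged
```

The point is not the specific rule but that the check runs on flow metadata alone: no agent on the workload, and no cooperation from a host an attacker may have silenced.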

Additional complexity in the cloud

As we migrate to the cloud, the network hasn’t gone away. In fact, it is more difficult to monitor than on-prem due to the shared responsibility model and the limits cloud providers place on legacy security tools. In addition to the management scalability problems of running agents on devices to look at packet streams, there’s the expense of the compute and memory required to do inspection in the first place, not to mention any additional cost incurred if packet decryption is necessary.

The cloud is also unfriendly to traditional packet analysis at scale: broad access to packets at the ingress/egress points of entire subnets simply isn’t supported, so it’s challenging to field effectively. Packet tap aggregators for the cloud exist, but they are extremely expensive and difficult to operate, and the traffic they aggregate still needs decryption. The TCO of going from 20 to 50 to hundreds or thousands of sensors across large enterprise networks is untenable. And again, we’re typically looking through a very narrow pipe, so we can see only a very limited set of things, and certainly not across clouds.

Without an equally comprehensive level of observability everywhere, organizations may be exposed to potential risks because if an attacker compromises one cloud environment, they may be able to move laterally within and between clouds, and even to on-prem infrastructure.

The power of network-level observability across your entire network

Being able to monitor, detect, investigate, and respond to activity that should never happen, in the cloud and on-prem, is essential for risk mitigation. The Netography Fusion™ platform lets you do that at scale, from a vantage point that covers a lot of territory without the heavy lift of numerous points of inspection.

Instead of looking at packets, you can observe all network traffic by analyzing metadata in the form of flow data across your hybrid multi-cloud environment. Not only do you get a more reliable and comprehensive picture of what is happening, you also get a lot of bang for your buck. The data is brought into our cloud-based analytics backend, where you can immediately and cost-effectively scale up capabilities without deploying any additional infrastructure. And the ability to see across hybrid multi-cloud domains at the same time lets you detect patterns and trends and get answers to more meaningful questions.
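To illustrate what flow metadata alone can provide, here is a minimal sketch that aggregates bytes per peer pair across hypothetical cloud and on-prem sources and flags pairs never seen during a baseline period. All names and records are invented for illustration, and require no packet capture or decryption:

```python
from collections import defaultdict

# Hypothetical flow metadata: (src, dst, bytes). This is the kind of summary
# record available from cloud flow logs across multiple environments.
flows = [
    ("vpc-a/10.0.1.5", "vpc-b/10.1.0.9", 1200),
    ("vpc-a/10.0.1.5", "vpc-b/10.1.0.9", 800),
    ("onprem/192.168.5.2", "vpc-a/10.0.1.5", 400),
]

# Assumed known-good peer pairs observed during a baseline period.
baseline = {("vpc-a/10.0.1.5", "vpc-b/10.1.0.9")}

# Aggregate traffic volume per (src, dst) pair.
traffic = defaultdict(int)
for src, dst, nbytes in flows:
    traffic[(src, dst)] += nbytes

# Any pair absent from the baseline is a candidate for investigation.
new_pairs = [pair for pair in traffic if pair not in baseline]
print(new_pairs)  # the on-prem -> vpc-a pair is new relative to the baseline
```

A real analytics backend would do far more (enrichment, time windows, cross-cloud correlation), but the sketch shows why flow records scale: each record is a few fields, yet together they reveal who is talking to whom across every environment at once.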

Complementing your “before” tools for protecting cloud apps with network-level observability in the “during” and “after” phases across your multi-cloud and on-prem environment delivers protection in each phase of the threat continuum. As you continue to migrate to the cloud, Netography Fusion provides the opportunity to observe and comprehend what’s happening minute to minute as compromises unfold, and to detect activities that are otherwise hard to see so you can respond.

The post The Forgotten Need for Network Observability in the Rush to Migrate to the Cloud appeared first on Netography.

*** This is a Security Bloggers Network syndicated blog from Netography authored by Martin Roesch. Read the original post at: