Packet-level networking and security technologies, such as stateful inspection firewalls, IPsec, and load balancing, impose relatively low computational demands in terms of the number of CPU cycles required for each packet. Furthermore, the per-packet processing is highly consistent, which simplifies performance prediction.
In today’s landscape, security functions such as firewall-as-a-service (FWaaS) are delivered by service providers who deploy them in the Cloud or in Points of Presence (PoPs). To serve multiple tenants, the underlying security implementations leverage a Virtual Routing and Forwarding (VRF) tenancy model. Under this model, traffic from multiple tenants traverses the same security device or container/process, which also addresses challenges related to overlapping IP addresses among tenants. Tenant traffic is identified through tunnel interfaces or other mechanisms, and tenant-specific configuration, such as per-tenant security policies, is then applied accordingly.
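The VRF tenancy model described above can be illustrated with a minimal sketch. All names, tunnel IDs, and policies here are hypothetical, and real implementations key on far richer state; the point is simply that tenancy is resolved by tunnel ID before any policy lookup, so overlapping tenant IP ranges never collide.

```python
# Minimal sketch (hypothetical names and policies) of VRF-style tenant
# demultiplexing in a shared process: each packet is keyed by its ingress
# tunnel ID, and only that tenant's policy table is consulted.
from dataclasses import dataclass, field

@dataclass
class TenantContext:
    name: str
    # Tenant-specific security policy: destination ports to block.
    blocked_ports: set = field(default_factory=set)

# One policy context per tenant, keyed by the ingress tunnel ID.
vrf_table = {
    101: TenantContext("tenant-a", blocked_ports={23}),
    102: TenantContext("tenant-b", blocked_ports={23, 3389}),
}

def process_packet(tunnel_id: int, src_ip: str, dst_port: int) -> str:
    tenant = vrf_table[tunnel_id]          # tenancy resolved first
    if dst_port in tenant.blocked_ports:   # then tenant-specific policy applied
        return f"{tenant.name}: drop {src_ip}->{dst_port}"
    return f"{tenant.name}: forward {src_ip}->{dst_port}"

# Both tenants can use the same private address without ambiguity:
print(process_packet(101, "10.0.0.5", 443))
print(process_packet(102, "10.0.0.5", 3389))
```

Because the lookup key is the tunnel, not the packet's IP addresses, two tenants reusing 10.0.0.5 are still handled under their own policies.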
To mitigate any potential “noisy neighbor” issues, packet rate limiting is applied at the ingress on a per-tenant basis. This strategy guarantees that the security performance of each individual tenant remains unaffected by the activities of other potentially problematic tenants. Given the consistent per-packet processing, rate limiting proves effective in ensuring equitable processing treatment for all tenants.
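Per-tenant ingress rate limiting is commonly implemented with a token bucket per tenant. The sketch below is illustrative only (the rates, burst sizes, and tenant names are assumptions, not any provider's actual configuration): a noisy tenant can exhaust only its own token budget, leaving other tenants' buckets untouched.

```python
# Minimal sketch of per-tenant ingress rate limiting using token buckets.
# Rates and tenant names are hypothetical.
import time

class TokenBucket:
    def __init__(self, rate_pps: float, burst: float):
        self.rate = rate_pps          # tokens (packets) refilled per second
        self.capacity = burst         # maximum burst size
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True               # packet admitted
        return False                  # packet dropped at ingress

# One bucket per tenant: isolation is structural, not best-effort.
buckets = {
    "tenant-a": TokenBucket(rate_pps=1000, burst=50),
    "tenant-b": TokenBucket(rate_pps=1000, burst=50),
}

def ingress(tenant: str) -> bool:
    return buckets[tenant].allow()
```

A burst from tenant-a larger than its bucket gets clipped at roughly the burst size, while tenant-b's very first packet is still admitted.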
Another significant concern for organizations is the potential leakage of sensitive data if malicious packets from other tenants exploit vulnerabilities within shared processes or containers. Security service providers often argue that per-packet processing is straightforward, reducing the likelihood of vulnerabilities and their exploitation. Since packet-level security technologies are indeed simpler, this argument has some validity.
Both challenges mentioned earlier, namely the “noisy neighbor” problem and “shared resource vulnerabilities,” may not pose significant issues for packet-level security technologies that utilize shared processes. However, we believe that these challenges can be more pronounced and substantial for SASE (Secure Access Service Edge) or SSE (Secure Service Edge) security technologies.
SASE/SSE security technologies transcend traditional packet-level security, offering a comprehensive suite of features:
Now, let’s delve into the execution differences between SASE/SSE and packet-level security technologies:
In summary, SASE/SSE security offers a comprehensive security framework beyond packet-level security, but it introduces complexities and challenges related to variable compute usage, intricate processing, and shared resources. Maintaining robust security in such environments is critical to safeguard against both performance degradation and data breaches or privacy violations.
Organizations undoubtedly value the rationale behind SASE/SSE providers employing shared processes for multiple tenants. This approach efficiently utilizes compute resources among tenants, contributing to sustainability and cost-effectiveness. Service providers can, in turn, pass on these cost savings to their customers.
However, certain industry segments are reluctant to accept the security risks associated with multi-tenancy architecture and shared processes. Some organizations may anticipate future needs for a more risk-averse approach. In such cases, organizations should seek SASE/SSE services that offer flexibility, providing options for both shared processes and dedicated processes/containers.
Dedicated execution contexts, with dedicated processes/containers for traffic processing, can effectively address the challenges outlined in the previous section:
As we look ahead, some organizations are becoming increasingly aware of the growing importance of confidential computing. This awareness is particularly relevant in the context of TLS inspection and the handling of large amounts of sensitive data, including secrets and passwords, within SASE/SSE services. A recurring concern is that personnel with access to the server infrastructure, including service provider staff, might gain unauthorized access to the memory of processes and containers. Additionally, attackers who manage to exploit server operating systems may potentially breach the memory of these containers and processes. This concern becomes more pronounced when services are available in multiple Points of Presence (PoPs) across countries with varying legal frameworks and enforcement.
Modern processors, such as those equipped with Intel Trust Domain Extensions (TDX), offer advanced features for trusted execution. These technologies play a crucial role in ensuring that even infrastructure administrators or attackers with elevated privileges cannot decipher memory content, as it remains securely encrypted by the TDX hardware.
SASE/SSE providers that offer dedicated execution contexts are better positioned to provide this essential confidentiality feature compared to others. Therefore, organizations are strongly advised to consider providers that offer the flexibility of both shared processes and dedicated execution contexts. This flexibility will help future-proof their risk mitigation strategies and ensure the highest level of data security in evolving landscapes.
The Aryaka CTO Insights blog series provides thought leadership for network, security, and SASE topics. For Aryaka product specifications refer to Aryaka Datasheets.
The post Choosing the Unified SASE Provider: The Execution Isolation Factor appeared first on Aryaka.
This is a Security Bloggers Network syndicated blog authored by Srini Addepalli at Aryaka. Read the original post at: https://www.aryaka.com/blog/choosing-the-unified-sase-provider/