Propagation of labels from Pods to VMs in micro-segmentation solution



L0 Member

Dear all,

I am looking for a Prisma Cloud Enforcer configuration in which micro-segmentation security policies defined at the level of a K8s namespace are, by design, not blocked by auto-secure rules for VM namespaces, without being overly permissive at the VM level. In other words, segmentation between VMs should follow dynamically from the security policies of the Pods and the placement of those Pods on VMs. This could be enabled, for example, by propagating a Pod's labels to its hosting VM, building on identity-based micro-segmentation where no protocols or ports need to be specified in the security rules.
This means that, by default, CNI plugins would not be able to route traffic between worker nodes in a K8s cluster. When Pods that are allowed to communicate with each other under a K8s namespace security rule are deployed on specific VMs, only on those specific VMs would CNI plugins be able to route container traffic to its peers; and when the Pods are removed from a VM again, that VM becomes unreachable again.
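The label-propagation idea above can be sketched in a few lines. This is a toy illustration under my own assumptions; the function and data shapes are hypothetical, not existing Prisma Cloud functionality:

```python
def propagate_labels(pods_on_node, base_labels=None):
    """Compute the label set a hosting VM would inherit: its own base
    labels plus the union of the labels of all Pods scheduled on it.
    If two Pods set the same key, the later one wins (a real controller
    would need an explicit conflict policy)."""
    labels = dict(base_labels or {})
    for pod_labels in pods_on_node:
        labels.update(pod_labels)
    return labels

# Two Pods land on a worker VM; the VM inherits their identity labels,
# so identity-based rules written against Pod labels now also match the VM.
pods = [{"app": "payments", "tier": "backend"},
        {"app": "payments", "tier": "cache"}]
print(propagate_labels(pods, {"role": "worker"}))
# {'role': 'worker', 'app': 'payments', 'tier': 'cache'}
```

When Pods are removed, recomputing over the remaining Pods naturally drops their labels again, which matches the "VM is not reachable again" behaviour described above.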

If this is not the default behavior of a particular Enforcer configuration, is it possible to add a worker VM to a K8s namespace so that the security policies of the K8s namespace also propagate to the VM, again assuming rules that do not refer to protocols or ports? If not, is it possible to statically segment the worker VMs of a K8s cluster into different VM namespaces?

 

I am not looking for a solution such as AWS EKS's security groups for Pods, because I want redundant, layered security policies for ultra-reliable applications, so that an attacker who escapes from a Pod into the VM, and thereby escapes all of the Pod's security groups, is still not able to move laterally across the worker VMs of the EKS cluster.

 

Best regards,

Eddy

1 accepted solution

Accepted Solutions

L1 Bithead

Hi Eddy,

 

We currently have the ability to write rules that apply to Kubernetes worker nodes (Kubernetes host mode), which you could then combine with rules you write for your pods/services. Based on your use case, you could automate such hybrid rule creation (all functionality is available via APIs). However, in the current iteration of the micro-segmentation product, you cannot mutate tags for dynamic rules, as this would open a door for an attacker to alter the existing security posture.
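As a rough sketch of what automating such hybrid rule creation could look like: the payload shapes and field names below are my own placeholders, not the actual Prisma Cloud API schema, which you would need to look up in the product's API documentation:

```python
def build_hybrid_rules(namespace, pod_selector, node_selector):
    """Build a matching pair of rule payloads: one for the pods/services
    and one for the worker nodes (host mode), so both layers are created
    from the same inputs and stay in sync. All field names here are
    illustrative placeholders, not the real API schema."""
    pod_rule = {
        "scope": "pod",
        "namespace": namespace,
        "subject": pod_selector,
        "action": "allow",
    }
    node_rule = {
        "scope": "host",  # rule targeting the Kubernetes worker nodes
        "subject": node_selector,
        "action": "allow",
    }
    return [pod_rule, node_rule]

rules = build_hybrid_rules("payments",
                           {"app": "payments"},
                           {"role": "worker"})
```

Each generated payload would then be submitted to the relevant rules endpoint; deriving both rules from the same selector inputs is what keeps the Pod layer and the host layer consistent with each other.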

 

Hope that helps

Julian

Helping protect our customers' digital way of life.



L3 Networker

@eddytruyen IMO, your question is NOT relevant to the Prisma Cloud Compute product.

 

Given that what you want is a Kubernetes-VM solution,

When you determine which K8s orchestration(s) you want your solution to support,

Then you should approach those resources with your vision.

 

K8s Orchestration(s)?  See https://en.wikipedia.org/wiki/List_of_cluster_management_software and https://en.wikipedia.org/wiki/Kubernetes

 

Tommy Hunt AWS-CSA, Java-CEA, PMP, SAFe Program Consultant
thunt@citrusoft.org
https://www.citrusoft.org

L1 Bithead

Hi Eddy,

 

Can you elaborate a bit more on what you are trying to achieve? 

It sounds to me like you would like to have rules for both your kubernetes nodes and your pods.

 

Julian

Helping protect our customers' digital way of life.

Hi Julian,

 

My aim is to research support for redundant firewall mechanisms at the Pod and node level for ultra-low-latency, high-reliability applications in the 5G era, such as Vehicle-to-X.

 

I would like to restrict the network attack surface between worker nodes in a K8s cluster: Pods and worker nodes should not be able to scan for vulnerabilities in other worker nodes or in the Pods on those nodes. This reduces the chance of finding powerful service account tokens that could be used to hijack the Kubernetes API server. In my opinion this is possible in a dynamic fashion by only allowing inter-node communication between nodes if Pods on those nodes need to communicate. E.g., in an empty cluster none of the worker nodes are allowed to talk to each other; in a fully loaded cluster the network attack surface between worker nodes is, for example, 60 percent of the Cartesian product over the nodes, instead of the usual 100%.
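The reduced attack surface described above can be illustrated with a toy model (my own sketch, not an existing product feature): two nodes may only talk if a Pod-level policy allows some pair of Pods placed on them to talk.

```python
from itertools import combinations

def allowed_node_pairs(placement, allowed_pod_pairs):
    """placement: pod name -> node name.
    allowed_pod_pairs: pairs of pods that a pod-level policy allows
    to communicate. Two distinct nodes are mutually reachable only if
    they host at least one such allowed pod pair between them."""
    pairs = set()
    for a, b in allowed_pod_pairs:
        na, nb = placement[a], placement[b]
        if na != nb:
            pairs.add(frozenset((na, nb)))
    return pairs

placement = {"p1": "n1", "p2": "n2", "p3": "n3"}
policy = {("p1", "p2")}  # only p1 <-> p2 may communicate
nodes = set(placement.values())
total = len(list(combinations(nodes, 2)))        # full mesh: 3 node pairs
open_pairs = allowed_node_pairs(placement, policy)
print(len(open_pairs), "of", total, "node pairs reachable")  # 1 of 3
```

Recomputing the reachable node pairs whenever Pods are scheduled or removed gives exactly the dynamic behaviour described above: an empty cluster yields an empty set, and a loaded cluster opens only the fraction of node pairs its Pod-level policies require.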

 

It seems that in Prisma Cloud the namespaces for Pods and the namespaces for VMs are separate domains that cannot be nested. If I could nest VMs inside Pod namespaces, I would already have a static, auto-secured solution to the problem.

 

But maybe a dynamic solution is also possible, where Pod-level permissions are propagated to the node level during the TCP Fast Open phase.

Hopefully this explains what I am trying to achieve.


Eddy
