Exploring Google Kubernetes Engine (GKE) Security

Palo Alto Networks Live Community explores how Google Kubernetes Engine brings Google's latest innovations to accelerate your time to market. Learn how to secure your Google Kubernetes Engine (GKE) environment on GCP in five critical areas.

 

Google Kubernetes Engine is a managed, production-ready environment for deploying containerized applications. It brings Google's latest innovations in developer productivity, resource efficiency, automated operations, and open-source flexibility to accelerate your time to market. Google launched Kubernetes Engine in 2015, building on more than 12 years of experience running services like Gmail and YouTube in containers.

As enterprises create more containerized workloads, security must be integrated at each stage of the build and deployment lifecycle. In this blog, you can learn how to secure your Google Kubernetes Engine (GKE) environment on GCP in five critical areas.

AUDIT LOGGING & MONITORING

 

Enable Stackdriver Logging

Stackdriver Logging lets Kubernetes Engine automatically collect, process and store your container and system logs in a dedicated, persistent datastore. By enabling Stackdriver Logging, you will have container and system logs. Kubernetes Engine deploys a per-node logging agent that reads container logs, adds helpful metadata and then stores them. The logging agent checks for container logs in the following sources:

 

  • Standard output and standard error logs from containerized processes
  • Kubelet and container runtime logs
  • Logs for system components, such as VM startup scripts

Stackdriver Logging is compatible with JSON and glog formats. Logs are stored for up to 30 days.
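
For reference, here is a sketch of requesting Stackdriver Logging at cluster creation time. The cluster name and zone are placeholders, and exact flag names can vary across gcloud releases:

  gcloud container clusters create my-cluster \
      --zone us-central1-a \
      --enable-cloud-logging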

 

Enable Stackdriver Monitoring

Stackdriver Monitoring lets you monitor signals and build operational insight into your Kubernetes Engine clusters. Stackdriver Monitoring can access metrics such as CPU utilization, disk traffic, network traffic and uptime information. By enabling Stackdriver Monitoring, you get both system metrics and custom metrics. System metrics are measurements of the cluster's infrastructure, such as CPU or memory usage. For system metrics, Stackdriver creates a deployment that periodically connects to each node, collects metrics about its pods and containers, and sends them to Stackdriver. System resource metrics are collected for CPU, memory, evictable memory, non-evictable memory and disk.
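
Similarly, monitoring can be requested when the cluster is created; this is only a sketch with placeholder names:

  gcloud container clusters create my-cluster \
      --zone us-central1-a \
      --enable-cloud-monitoring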

AUTHENTICATION & AUTHORIZATION

 

Disable Legacy Authorization

The legacy authorizer in Kubernetes Engine grants broad, statically defined permissions. To ensure that role-based access control (RBAC) limits permissions correctly, you must disable the legacy authorizer. RBAC has significant security advantages: it helps ensure that users have access only to cluster resources within their own namespace, and it is now stable in Kubernetes.
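
On an existing cluster, the legacy authorizer can typically be turned off with an update along these lines (cluster name and zone are placeholders):

  gcloud container clusters update my-cluster \
      --zone us-central1-a \
      --no-enable-legacy-authorization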

 

Disable Basic Authentication

Basic authentication allows a user to authenticate to the cluster with a username and password, which are stored in plain text without any encryption. Disabling basic authentication helps prevent brute-force attacks. It's recommended to use either client certificates or IAM for authentication.
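
For example, a new cluster can be created without basic authentication using roughly the following (placeholder names; on newer GKE versions basic auth is already disabled by default):

  gcloud container clusters create my-cluster \
      --zone us-central1-a \
      --no-enable-basic-auth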

 

Enable Client Certificate

A client certificate is a base64-encoded public certificate used by clients to authenticate to a cluster endpoint. Use client certificates to authenticate to a cluster instead of basic authentication, which offers no encryption and is susceptible to brute-force attacks.
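
As an illustrative sketch (placeholder names; the flag's availability depends on your gcloud release), a client certificate can be requested when the cluster is created:

  gcloud container clusters create my-cluster \
      --zone us-central1-a \
      --issue-client-certificate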

 

Default Service Account is Not Used for Project Access

By default, Kubernetes Engine nodes are given the Compute Engine default service account. This account has broad access by default, making it useful to a wide variety of applications, but it has more permissions than are required to run your Kubernetes Engine cluster. You should create and use a minimally privileged service account to run your Kubernetes Engine cluster instead of using the Compute Engine default service account.
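
One possible workflow, shown here as a sketch with placeholder project, account and cluster names, is to create a dedicated node service account with only narrow roles (for example, log and metric writing) and pass it at cluster creation:

  gcloud iam service-accounts create gke-node-sa --display-name "Minimal GKE node SA"
  gcloud projects add-iam-policy-binding my-project \
      --member "serviceAccount:gke-node-sa@my-project.iam.gserviceaccount.com" \
      --role roles/logging.logWriter
  gcloud projects add-iam-policy-binding my-project \
      --member "serviceAccount:gke-node-sa@my-project.iam.gserviceaccount.com" \
      --role roles/monitoring.metricWriter
  gcloud container clusters create my-cluster \
      --zone us-central1-a \
      --service-account gke-node-sa@my-project.iam.gserviceaccount.com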

 

Kubernetes Clusters Created with Limited Service Account Access Scopes for Project Access

If you are not creating a separate service account for your nodes, you should limit the scopes of the node service account to reduce the possibility of a privilege escalation in an attack.
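
If you stay with the default service account, the node scopes can be narrowed at creation time. The following sketch (placeholder names) limits nodes to read-only storage, logging and monitoring scopes:

  gcloud container clusters create my-cluster \
      --zone us-central1-a \
      --scopes https://www.googleapis.com/auth/devstorage.read_only,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring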

 

Disable the Kubernetes Web UI (Dashboard)

The Kubernetes web UI (Dashboard) is a web-based user interface backed by a highly privileged Kubernetes service account. Although it no longer has admin access by default in Kubernetes Engine, the Cloud Console provides much of the same functionality, so you don't need to run the Dashboard at all.
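
On an existing cluster, the Dashboard add-on can usually be disabled with something like the following (placeholder names):

  gcloud container clusters update my-cluster \
      --zone us-central1-a \
      --update-addons KubernetesDashboard=DISABLED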


NETWORK SECURITY

 

Enable Authorized Networks for Master Access

Authorized networks are a way of specifying a restricted range of IP addresses that are permitted to access your container cluster's Kubernetes master endpoint. Kubernetes Engine uses both Transport Layer Security (TLS) and authentication to provide secure access to the master endpoint from the public internet, which gives you the flexibility to administer your cluster from anywhere. However, you might want to further restrict access to a set of IP addresses that you control, and you can set this restriction by specifying an authorized network.

 

Enabling master authorized networks can provide additional security benefits for your container cluster, including:

 

  • Better Protection from Outsider Attacks – Authorized networks provide an additional layer of security by limiting external, non-GCP access to a specific set of addresses you designate, such as those that originate from your premises. This helps protect access to your cluster in the case of a vulnerability in the cluster's authentication or authorization mechanism.
  • Better Protection from Insider Attacks – Authorized networks help protect your cluster from accidental leaks of master certificates from your company's premises. Leaked certificates used from outside GCP and outside the authorized IP ranges are still denied access (e.g., from addresses outside your company).
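
As a sketch (cluster name, zone and CIDR below are placeholders), master authorized networks can be enabled on an existing cluster with:

  gcloud container clusters update my-cluster \
      --zone us-central1-a \
      --enable-master-authorized-networks \
      --master-authorized-networks 203.0.113.0/24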

 

Enable NetworkPolicy for Secure Pod Communication

A network policy is a specification of how groups of pods are allowed to communicate with each other and other network endpoints. NetworkPolicy resources use labels to select pods and define rules that specify what traffic is allowed to the selected pods. By default, pods are non-isolated. They accept traffic from any source. Once there is any NetworkPolicy in a namespace selecting a particular pod, that pod will reject any connections that are not allowed by any NetworkPolicy.
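
To illustrate, network policy enforcement can be requested at cluster creation, and a policy manifest can then restrict which pods may communicate. The cluster name, namespace and labels below are purely hypothetical; save the manifest to a file and apply it with kubectl apply -f:

  gcloud container clusters create my-cluster \
      --zone us-central1-a \
      --enable-network-policy

  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-frontend-to-api
    namespace: default
  spec:
    podSelector:
      matchLabels:
        app: api
    policyTypes:
    - Ingress
    ingress:
    - from:
      - podSelector:
          matchLabels:
            app: frontend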

 

Kubernetes Cluster is Created with Alias IP Ranges Enabled

With alias IP ranges enabled, Kubernetes Engine clusters can allocate IP addresses from a CIDR block known to Google Cloud Platform. This makes your cluster more scalable and allows your cluster to better interact with other GCP products and entities. Using alias IP ranges has several benefits:

 

  • Pod IPs are reserved within the network ahead of time, which prevents conflict with other compute resources.
  • The networking layer can perform anti-spoofing checks to ensure that egress traffic is not sent with arbitrary source IPs.
  • Firewall controls for pods can be applied separately from their nodes.
  • Alias IPs allow pods to directly access hosted services without using a NAT gateway.
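
For reference, a VPC-native (alias IP) cluster can be requested at creation time. This is a minimal sketch with placeholder names that lets GKE pick the secondary ranges:

  gcloud container clusters create my-cluster \
      --zone us-central1-a \
      --enable-ip-alias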

 

Kubernetes Cluster is Created with Private Cluster Enabled

A private cluster is a cluster that makes your master inaccessible from the public internet. In a private cluster, nodes do not have public IP addresses, so your workloads run in an environment that is isolated from the internet. Nodes have addresses only in the private RFC 1918 address space. Nodes and masters communicate with each other privately using VPC peering. With a private cluster enabled, VPC network peering gives you several advantages over using external IP addresses or VPNs to connect networks, including:

  • Network Latency – Public IP networking suffers higher latency than private networking.
  • Network Security – Service owners do not need to have their services exposed to the public internet and deal with its associated risks.
  • Network Cost – GCP charges egress bandwidth pricing for networks using external IPs to communicate even if the traffic is within the same zone. If the networks have peered, however, they can use internal IPs to communicate and save on those egress costs. Regular network pricing still applies to all traffic.
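
A private cluster might be created along the following lines. All names and CIDRs are placeholders; private clusters also require alias IP ranges, and you will usually pair them with master authorized networks:

  gcloud container clusters create my-private-cluster \
      --zone us-central1-a \
      --enable-private-nodes \
      --enable-ip-alias \
      --master-ipv4-cidr 172.16.0.32/28 \
      --enable-master-authorized-networks \
      --master-authorized-networks 203.0.113.0/24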

 

Private Google Access is Set on Kubernetes Engine Cluster Subnets

Private Google Access enables your cluster hosts, which only have private IP addresses, to communicate with Google APIs and services using an internal IP address rather than an external IP address. Internal (private) IP addresses are internal to Google Cloud Platform and are not routable or reachable over the internet. You can use Private Google Access to allow VMs without internet access to reach Google APIs, services and properties that are accessible over HTTP/HTTPS.
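
Private Google Access is a property of the subnet rather than the cluster. As a sketch (subnet name and region are placeholders), it can be enabled with:

  gcloud compute networks subnets update my-subnet \
      --region us-central1 \
      --enable-private-ip-google-access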

 

IP Rotation for Master

You can perform an IP rotation to change the IP address that your cluster's Kubernetes master uses to serve requests from the Kubernetes API. IP rotation also changes the SSL certificate and cluster certificate authority, so there is no externally-visible connection between the previous address and the new one.
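
IP rotation is a two-step operation. The commands below are a sketch with placeholder names; complete the rotation only after all API clients and kubeconfigs have been updated to the new address:

  gcloud container clusters update my-cluster --zone us-central1-a --start-ip-rotation
  # ... reconfigure API clients, then:
  gcloud container clusters update my-cluster --zone us-central1-a --complete-ip-rotation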

 

Ensure Kubernetes Cluster Master is Configured with Credential Rotation

You can perform a credential rotation to revoke and issue new credentials for your cluster. Google recommends that you use credential rotation regularly to reduce credential lifetime and further secure your Kubernetes Engine cluster. In addition to rotating credentials, credential rotation also performs an IP rotation.
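
Credential rotation follows the same start/complete pattern (placeholder names again; completing the rotation invalidates the old credentials):

  gcloud container clusters update my-cluster --zone us-central1-a --start-credential-rotation
  gcloud container clusters update my-cluster --zone us-central1-a --complete-credential-rotation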

 

Ensure HTTP Load Balancing is Enabled

Enabling HTTP/HTTPS load balancing provides global load balancing for HTTP/HTTPS requests destined for your instances. It also has a security advantage: HTTP/HTTPS load balancers let Kubernetes Engine terminate unauthorized HTTP/HTTPS requests and make better context-aware load balancing decisions.
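
If the add-on has been disabled, it can be re-enabled on an existing cluster with something like the following (placeholder names):

  gcloud container clusters update my-cluster \
      --zone us-central1-a \
      --update-addons HttpLoadBalancing=ENABLED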

NODE SECURITY

 

Enable Automatic Node Repair for Kubernetes Cluster Nodes

Kubernetes Engine's node auto-repair feature helps you keep the nodes in your cluster in a healthy, running state. When enabled, Kubernetes Engine makes periodic checks on the health state of each node in your cluster. If a node fails consecutive health checks over an extended time period, Kubernetes Engine initiates a repair process for that node.
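
As a sketch with placeholder names, node auto-repair can be requested when the cluster is created:

  gcloud container clusters create my-cluster \
      --zone us-central1-a \
      --enable-autorepair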

 

Enable Automatic Node Upgrades for Kubernetes Cluster Nodes

Node auto-upgrades help you keep the nodes in your cluster or node pool up to date with the latest stable version of Kubernetes. When the upgrade is performed, the node pool is upgraded to match the current cluster master version. Some benefits of enabling auto-upgrades are:

 

  • Lower Management Overhead – You don't have to manually track and update to the latest version of Kubernetes.
  • Better Security – Sometimes new binaries are released to fix a security issue. With auto-upgrades, Kubernetes Engine automatically ensures that security updates are applied and kept up to date.
  • Ease of use – Provides a simple way to keep your nodes up to date with the latest Kubernetes features.
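
For example (placeholder names), a node pool can be created with both auto-upgrade and auto-repair turned on:

  gcloud container node-pools create my-pool \
      --cluster my-cluster \
      --zone us-central1-a \
      --enable-autoupgrade \
      --enable-autorepair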

 

Container-Optimized OS (COS) is Used for Kubernetes Engine Cluster Node Images

The Container-Optimized OS node image is based on a recent version of the Linux kernel and is optimized to enhance node security. It is backed by a team at Google that can quickly patch it for security and iterate on features. The Container-Optimized OS image provides better support, security, and stability than previous images.
Enabling Container-Optimized OS provides the following benefits:

 

  • Run Containers Out of the Box – Container-Optimized OS instances come pre-installed with the Docker runtime and cloud-init. With a Container-Optimized OS instance, you can bring up your Docker container at the same time you create your VM, with no on-host setup required.
  • Smaller Attack Surface – Container-Optimized OS has a smaller footprint, reducing your instance's potential attack surface.
  • Locked-Down by Default – Container-Optimized OS instances include a locked-down firewall and other security settings by default.
  • Automatic Updates – Container-Optimized OS instances are configured to automatically download weekly updates in the background. Only a reboot is necessary to use the latest updates.
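
As an illustration (placeholder names), the node image type can be set to Container-Optimized OS at cluster creation:

  gcloud container clusters create my-cluster \
      --zone us-central1-a \
      --image-type COS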

 

PodSecurityPolicy Controller is Enabled on the Kubernetes Engine Clusters

A PodSecurityPolicy is a cluster-level resource that controls security-sensitive aspects of the pod specification. PodSecurityPolicy objects define a set of conditions that a pod must run with in order to be accepted into the system, as well as defaults for the related fields. A PodSecurityPolicy specifies a list of restrictions, requirements and defaults for pods created under the policy.
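
On GKE this controller has historically been exposed as a beta feature; as a rough sketch with placeholder names, it can be enabled with:

  gcloud beta container clusters update my-cluster \
      --zone us-central1-a \
      --enable-pod-security-policy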

 

Unsupported Master Version

The cluster master runs the Kubernetes control plane processes, including the Kubernetes API server, scheduler and core resource controllers. The master's lifecycle is managed by Kubernetes Engine when you create or delete a cluster. This includes upgrades to the Kubernetes version running on the cluster master, which Kubernetes Engine performs automatically or manually at your request if you prefer to upgrade earlier than the automatic schedule.

 

Unsupported Node Version

Kubernetes Engine does not support running node versions more than two minor versions behind the master version. For example, if the cluster master is running version 1.8, nodes must be running at least version 1.6. Nodes may not run a newer version of Kubernetes than the cluster master.
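
You can check how far your nodes have drifted from the master with a describe call; the cluster name and zone below are placeholders:

  gcloud container clusters describe my-cluster \
      --zone us-central1-a \
      --format "value(currentMasterVersion,currentNodeVersion)"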

 

Ensure Kubernetes Cluster Version is Updated

It is recommended to use the latest supported Kubernetes version currently available on Kubernetes Engine in the cluster's zone or region.
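
As a sketch (placeholder names and version), you can list the versions available in your zone and then upgrade the master and, afterwards, the nodes:

  gcloud container get-server-config --zone us-central1-a
  gcloud container clusters upgrade my-cluster --zone us-central1-a --master --cluster-version 1.11.6-gke.2
  gcloud container clusters upgrade my-cluster --zone us-central1-a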


BILLING

 

Kubernetes Clusters are Configured with Labels

A cluster label is a key-value pair that helps you organize your Google Cloud Platform resources, such as clusters. You can attach a label to each resource, then filter the resources based on their labels. Information about labels is forwarded to the billing system, so you can break down your billing charges by the label.
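
For instance (placeholder names and labels), labels can be attached to an existing cluster with:

  gcloud container clusters update my-cluster \
      --zone us-central1-a \
      --update-labels team=payments,env=prod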

 

The CIS benchmark includes several important security controls, including authentication, network security controls and Pod Security Policies, among others. The RedLock security research team played a pivotal role in influencing the CIS committee to include GKE in the benchmark. RedLock supports a majority of the GKE-related security benchmarks and is the first in the industry to provide a security benchmark against a fully managed Kubernetes service in the cloud.


 


For a demo, visit RedLock - Palo Alto Networks to see how RedLock can help you with:

  • CIS Compliance assurance
  • Security governance
  • Security Operations

If you’d like to learn more about our support for GCP CIS, please contact us for more information.
