DevOps by Default Blog

Kubernetes Namespace Isolation: Beyond the Basics


Running multiple teams on a shared Kubernetes cluster sounds efficient until one team’s runaway pod consumes all the cluster resources. We learned this the hard way.

The Problem

Namespaces provide logical separation but not isolation. By default, pods in one namespace can communicate with pods in any other namespace. A memory leak in the staging namespace can starve production workloads. One team’s misconfigured service can accidentally route traffic meant for another.

The Kubernetes documentation mentions these limitations but doesn’t emphasise them enough. Many organisations discover the gaps only after an incident.

We needed genuine isolation: network boundaries, resource limits, and clear ownership—without spinning up separate clusters for each team.

Our Solution

Network Policies became our first line of defence. We started with a default-deny policy in every namespace, then explicitly allowed required traffic. Calico provided the CNI plugin with robust policy support.

# Default deny all ingress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress

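With the default deny in place, each required flow gets its own explicit allow policy. As a hedged sketch (the `app: api` label and policy name are illustrative, not our actual manifests), a policy permitting traffic from pods within the same namespace looks like this:

```yaml
# Illustrative example: after the default deny, explicitly allow ingress
# to pods labelled app: api from other pods in the same namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}   # empty pod selector = all pods in this namespace
```

Cross-namespace flows use a namespaceSelector in the from clause instead, which keeps each exception visible and reviewable.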
Resource Quotas prevented resource hogging. Each namespace received CPU and memory limits based on the team’s allocation. Limit Ranges ensured individual pods couldn’t claim excessive resources.
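A minimal sketch of the pairing described above (the names and numbers here are illustrative, not our production values): the ResourceQuota caps the namespace's aggregate requests and limits, while the LimitRange sets per-container defaults and a maximum so no single pod can claim the whole budget.

```yaml
# Illustrative quota for one team namespace: caps aggregate CPU/memory.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
---
# LimitRange: default and maximum per-container resources,
# so a single pod cannot consume the entire namespace quota.
apiVersion: v1
kind: LimitRange
metadata:
  name: team-limits
spec:
  limits:
  - type: Container
    defaultRequest:
      cpu: 250m
      memory: 256Mi
    default:
      cpu: 500m
      memory: 512Mi
    max:
      cpu: "4"
      memory: 8Gi
```

The LimitRange defaults also mean pods that omit resource requests are still admitted under a quota, which otherwise rejects them.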

RBAC restricted who could do what. Teams received admin access to their namespaces only. Cluster-wide operations required elevated permissions.
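One common way to express this (a sketch; the `team-a` namespace and `team-a-devs` group are hypothetical) is a RoleBinding that grants the team's group the built-in `admin` ClusterRole, scoped to their namespace only:

```yaml
# Illustrative binding: the team's group gets admin rights
# inside its own namespace, via the built-in "admin" ClusterRole.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-admins
  namespace: team-a        # hypothetical tenant namespace
subjects:
- kind: Group
  name: team-a-devs        # hypothetical group from the identity provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin              # namespaced admin, not cluster-admin
  apiGroup: rbac.authorization.k8s.io
```

Because the ClusterRole is referenced from a RoleBinding rather than a ClusterRoleBinding, the permissions apply only within `team-a`.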

Pod Security Policies (later replaced by Pod Security Standards) prevented privileged containers and host network access in tenant namespaces.
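Under Pod Security Standards, that restriction is applied by labelling each tenant namespace for the Pod Security admission controller. A minimal sketch (namespace name hypothetical):

```yaml
# Illustrative namespace labelled for Pod Security Admission:
# the "restricted" profile rejects privileged containers and hostNetwork.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a             # hypothetical tenant namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
```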

The Benefits

Incidents became contained. A runaway process in one namespace now hits its quota ceiling rather than starving its neighbours, and network policies mean a compromised pod can't easily pivot into other namespaces.

Teams gained autonomy within their boundaries. They can deploy, scale, and debug without cluster-admin intervention. Clear resource budgets encourage efficient application design.

Capacity planning improved. We can see exactly how much each team uses versus their allocation. Conversations about resource needs became data-driven.

Multi-tenancy isn’t free, but it’s cheaper than running separate clusters for each team. The isolation investment paid dividends in stability and security.