Kubernetes is an open source orchestration system for automating the management, placement, scaling, and routing of containers. It has become the de facto standard among developers and IT operations teams for running containerized applications reliably at scale. Kubernetes works across clusters of machines (nodes) and provides rolling updates, self-healing of failed components, load balancing, resource quotas, and more.
You can read more about Kubernetes from the official Kubernetes website: https://kubernetes.io/
When to choose Kubernetes instead of Docker Compose:
| Feature | Docker Compose | Kubernetes |
| --- | --- | --- |
| Scope | Single host, local development, small projects | Multi-host clusters, production-scale systems |
| Scaling | Manual (docker-compose up --scale); see the example below | Automatic horizontal scaling based on load |
| High availability | Not built in, limited to one host | Self-healing, rescheduling, replication across nodes |
| Deployment strategies | Basic start/stop and restart | Rolling updates, canary releases, blue-green deployments |
| Networking | Simple bridge networks on one machine | Cluster-wide service discovery, internal DNS, load balancing |
| Storage | Volumes mapped to host directories | Persistent volumes across nodes, dynamic provisioning |
| Use case | Best for development, prototyping, small apps | Best for production workloads, large and distributed systems |
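For example, scaling a web service looks quite different in the two tools. The commands below are a minimal sketch: the Compose service name web is a placeholder, nginx-deployment refers to the Deployment created later in this article, and the autoscaler additionally needs a metrics source (such as metrics-server) running in the cluster.

# Docker Compose: manually run three containers of the "web" service
docker-compose up --scale web=3 -d

# Kubernetes: manually scale a Deployment to three replicas
kubectl scale deployment nginx-deployment --replicas=3

# Kubernetes: automatically scale between 2 and 5 replicas based on CPU usage
kubectl autoscale deployment nginx-deployment --min=2 --max=5 --cpu-percent=80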
Before deploying applications, you need to enable Kubernetes in Docker Desktop.
Open Docker Desktop → Settings → Kubernetes, check "Enable Kubernetes", and wait until the status changes to Kubernetes is running.
Once enabled, you can use the kubectl command-line tool to manage your local cluster.
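Docker Desktop registers the local cluster under the context name docker-desktop. If kubectl appears to be talking to a different cluster (for example, a remote cluster you also work with), you can check and switch the active context:

kubectl config get-contexts
kubectl config use-context docker-desktop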
This example shows how to deploy a simple Nginx web server using Kubernetes. First, verify that the cluster is running and reachable:
kubectl get nodes
You should see one node listed (the Docker Desktop node).
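The exact output depends on your installation, but it should look roughly like this (age and version will differ):

NAME             STATUS   ROLES           AGE   VERSION
docker-desktop   Ready    control-plane   2d    v1.xx.x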
Next, create a file named nginx-deployment.yaml with the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2                # keep two identical Pods running at all times
  selector:
    matchLabels:
      app: nginx             # manage the Pods that carry this label
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80   # the port Nginx listens on inside the container
Then create nginx-service.yaml, which exposes the Deployment:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx           # send traffic to Pods labelled app: nginx
  ports:
    - protocol: TCP
      port: 80           # port exposed by the Service
      targetPort: 80     # port on the Pods that receives the traffic
  type: LoadBalancer
Apply both manifests to the cluster:
kubectl apply -f nginx-deployment.yaml
kubectl apply -f nginx-service.yaml
Then check what was created:
kubectl get deployments
kubectl get pods
kubectl get services
You should see two pods (because of replicas: 2) and a service called nginx-service.
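If something does not look right, the usual first steps are to describe the Deployment and read the container logs (kubectl picks one of the Deployment's pods for you):

kubectl describe deployment nginx-deployment
kubectl logs deployment/nginx-deployment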
Depending on your Docker Desktop version, the LoadBalancer Service may not receive a usable external IP (check the EXTERNAL-IP column of kubectl get services). A reliable way to reach it locally is port forwarding:
kubectl port-forward service/nginx-service 8080:80
Now open your browser at http://localhost:8080 and you should see the Nginx welcome page.
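You can also verify it from the command line; the response headers should identify the server as nginx:

curl -I http://localhost:8080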
In this example, Kubernetes is used to manage the deployment of an Nginx web server. Instead of running a single container manually, Kubernetes ensures that two replicas of the Nginx container are always running. If one of them fails, Kubernetes will automatically restart it. The Service definition makes the application accessible inside the cluster and, with port forwarding, available on your local machine. This demonstrates why Kubernetes is useful: it provides automation, scaling, and reliability, which are difficult to achieve when starting containers manually with Docker commands or Docker Compose.
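You can see the self-healing behaviour for yourself by deleting one of the two pods and watching the Deployment replace it. The pod name below is a placeholder; use one of the names printed by kubectl get pods:

kubectl delete pod <one-of-the-nginx-pod-names>
kubectl get pods

After a few seconds a new pod appears, restoring the desired count of two replicas.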
When you no longer need the Nginx example, you can stop and remove the Kubernetes resources without shutting down the entire Docker Desktop Kubernetes cluster.
1. Delete the Kubernetes resources
kubectl delete -f nginx-service.yaml
kubectl delete -f nginx-deployment.yaml
This removes both the Deployment (and its Pods) and the Service. You can verify with:
kubectl get pods
kubectl get services
2. Remove the Docker image (optional)
docker rmi nginx:latest
This deletes the nginx:latest image from your local Docker cache. If it is still in use, Docker will prevent removal until all related containers are stopped and removed.
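If removal is blocked, you can list the containers that were created from the image and remove them first (the ancestor filter matches containers based on that image):

docker ps -a --filter ancestor=nginx:latest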
3. Do you need to stop the Kubernetes cluster?
No. There is no need to stop the Kubernetes cluster itself in Docker Desktop. It can safely keep running, and you only remove the resources you created. You may disable the cluster only if you want to save system resources when Kubernetes is not in use.