This guide demonstrates how to expose a Kubernetes Deployment using different Service types: ClusterIP, NodePort, and LoadBalancer.
1. Prerequisites: Sample Nginx Deployment
First, let's create a simple Nginx Deployment with 3 replicas.
Create nginx_deploy.yaml:
cat > nginx_deploy.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9 # Note: nginx:1.7.9 is an older version. Consider using a more recent tag for new deployments.
        ports:
        - containerPort: 80
EOF
Apply the Deployment and check Pods:
kubectl apply -f nginx_deploy.yaml
kubectl get po -l app=nginx   # List pods with label app=nginx
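If you want to wait until all three replicas are ready before exposing them, kubectl can block until the rollout completes (an optional extra step):

kubectl rollout status deployment/nginx-deployment   # Returns once all replicas are ready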
2. Exposing the Deployment with Service Types
Now, we'll expose the nginx-deployment using different Service types. Each Service uses the selector app: nginx to find the Pods managed by our Deployment.
2.1 ClusterIP Service
A ClusterIP service exposes the application on an internal IP address within the cluster. This is the default Service type.
Create nginx_clusterip.yaml:
cat > nginx_clusterip.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-clusterip
spec:
  ports:
  - port: 80         # Service port
    targetPort: 80   # Container port (where nginx is listening)
    protocol: TCP
  selector:
    app: nginx
  # type: ClusterIP  # This is the default, so it can be omitted
EOF

(targetPort is set explicitly for clarity; if omitted, it defaults to the value of port.)
Apply the ClusterIP Service and check Services:

kubectl apply -f nginx_clusterip.yaml
kubectl get svc my-nginx-clusterip
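Since a ClusterIP is reachable only from inside the cluster, one way to test it is from a temporary Pod. A minimal sketch, assuming the busybox image is available; the Pod name tmp-shell is arbitrary and the Service is addressed by its in-cluster DNS name:

kubectl run tmp-shell --rm -it --image=busybox --restart=Never -- wget -qO- http://my-nginx-clusterip
# Prints the default nginx welcome page, then the temporary Pod is deleted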
2.2 NodePort Service
A NodePort service exposes the application on a static port on each Node's IP address.
Create nginx_nodeport.yaml:
cat > nginx_nodeport.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-nodeport
spec:
  type: NodePort
  ports:
  - port: 80          # Service port (the internal ClusterIP listens on this)
    targetPort: 80    # Container port
    # nodePort: 30080 # Optional: pick a NodePort within the valid range (e.g., 30000-32767). If omitted, a random one is assigned.
    protocol: TCP
  selector:
    app: nginx
EOF
Apply the NodePort Service and check Services:

kubectl apply -f nginx_nodeport.yaml
kubectl get svc my-nginx-nodeport -o wide
(Note the assigned NodePort from the output. You can access this service via <NodeIP>:<NodePort>.)
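For example, you could look up a node address and curl the assigned port from a machine that can reach the node (a sketch; substitute the real values from your cluster):

kubectl get nodes -o wide        # Node addresses appear in the INTERNAL-IP / EXTERNAL-IP columns
curl http://<NodeIP>:<NodePort>  # e.g. curl http://192.168.49.2:30080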
2.3 LoadBalancer Service
A LoadBalancer service exposes the application externally using a cloud provider's load balancer. (Note: This Service type requires a cloud provider or an environment that can provision load balancers, such as Minikube with minikube tunnel.)
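On Minikube, for example, keeping the tunnel running in a separate terminal is enough for LoadBalancer Services to receive an address:

minikube tunnel   # Leave this running while you test the Service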
Create nginx_loadbalancer.yaml:
cat > nginx_loadbalancer.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-loadbalancer
spec:
  type: LoadBalancer
  ports:
  - port: 80         # Service port exposed by the load balancer
    targetPort: 80   # Container port
    protocol: TCP
  selector:
    app: nginx
EOF
Apply the LoadBalancer Service and check Services:

kubectl apply -f nginx_loadbalancer.yaml
kubectl get svc my-nginx-loadbalancer -o wide
(It might take some time for the EXTERNAL-IP to be assigned by the cloud provider.)
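Once the EXTERNAL-IP is populated you can reach the application through it. A quick check, assuming your provider reports an IP (some report a hostname instead):

kubectl get svc my-nginx-loadbalancer -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
curl http://<EXTERNAL-IP>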
3. How Services Track Pods

Services work by watching Pods that match their selector and updating an Endpoints object (or EndpointSlices on newer Kubernetes versions) with the IP addresses and ports of the ready Pods.

# Describe the ClusterIP service to see its selector and IP
kubectl describe svc my-nginx-clusterip

# Get the Endpoints object associated with the service.
# This shows the actual Pod IPs and ports that traffic is routed to.
kubectl get endpoints my-nginx-clusterip

# Or, for more detail:
# kubectl get endpoints my-nginx-clusterip -o yaml
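On clusters where EndpointSlices are in use (the default on recent Kubernetes versions), you can also inspect them directly; they are linked to their Service by the kubernetes.io/service-name label:

kubectl get endpointslices -l kubernetes.io/service-name=my-nginx-clusterip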
4. Pods Matching a Service Selector
If you create a new Pod with labels that match an existing Service's selector, that Pod will automatically become part of the Service and start receiving traffic.
The my-nginx-clusterip Service uses the selector app: nginx. Run a new Nginx pod with a matching label:

kubectl run fish-nginx --image=nginx --labels="app=nginx"
kubectl get po -l app=nginx -o wide   # You should see 'fish-nginx' alongside the Deployment's pods
Observe the Endpoints:

kubectl get svc my-nginx-clusterip
kubectl get endpoints my-nginx-clusterip -o yaml   # Check the 'subsets' list for the IP of 'fish-nginx'

Wait a few moments for the new pod to become ready; you should then see the IP address of the fish-nginx pod added to the list of endpoints for my-nginx-clusterip.
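The reverse also holds: if a Pod's labels stop matching the selector, its IP is removed from the Endpoints. A small sketch (the replacement label value is arbitrary):

kubectl label pod fish-nginx app=standalone --overwrite   # No longer matches the app=nginx selector
kubectl get endpoints my-nginx-clusterip                  # fish-nginx's IP should disappear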