This lab demonstrates how to set up Role-Based Access Control (RBAC) in Kubernetes. We will create two service accounts, developer and deployer, define distinct roles for them within a specific namespace, bind these roles to the service accounts, and then generate kubeconfig files to test their respective permissions.
1. Setup Namespace and Service Accounts
Define the target namespace:
export NAMESPACE="app2"
kubectl create namespace $NAMESPACE  # Ensure the namespace exists
Create Service Accounts:
Create two service accounts, developer and deployer, in the defined namespace.
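A minimal sketch, assuming `kubectl` is pointed at the target cluster and $NAMESPACE is set as above:

```shell
# Create the two service accounts used throughout this lab
kubectl create serviceaccount developer -n $NAMESPACE
kubectl create serviceaccount deployer -n $NAMESPACE
```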
2. Define Roles (roles.yaml)
Create a file named roles.yaml with the following content. This file defines a deployer role with broad permissions to manage application resources and a developer role with more restricted, mostly read-only permissions.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployer
  namespace: app2  # Should match $NAMESPACE
rules:
- apiGroups: ["", "apps", "extensions", "networking.k8s.io"]
  resources: ["deployments", "replicasets", "statefulsets", "daemonsets", "configmaps", "pods", "secrets", "services", "ingresses", "persistentvolumeclaims"]
  verbs: ["create", "get", "delete", "list", "update", "patch", "watch"]
- apiGroups: [""]  # For core subresources such as pods/log and pods/exec
  resources: ["pods/log", "pods/exec"]
  verbs: ["get", "list", "watch", "create"]  # 'create' is required for exec
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
  namespace: app2  # Should match $NAMESPACE
rules:
- apiGroups: ["", "apps", "extensions", "networking.k8s.io"]
  resources: ["deployments", "replicasets", "statefulsets", "daemonsets", "pods", "services", "ingresses", "configmaps"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create", "get"]  # 'create' for exec; 'get' may be needed by some tools
3. Apply Roles
Apply the defined roles to the cluster within the specified namespace.
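Assuming roles.yaml was created in the current directory:

```shell
kubectl apply -f roles.yaml -n $NAMESPACE
```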
4. Define Role Bindings (bindings.yaml)
Create a file named bindings.yaml with the following content. This binds the deployer Role to the deployer ServiceAccount and the developer Role to the developer ServiceAccount.
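A sketch of the bindings.yaml content, written here as a heredoc for scriptability; the apiGroup and roleRef fields follow the standard RBAC RoleBinding schema:

```shell
cat << 'EOF' > bindings.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployer
  namespace: app2
subjects:
- kind: ServiceAccount
  name: deployer
  namespace: app2
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: deployer
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer
  namespace: app2
subjects:
- kind: ServiceAccount
  name: developer
  namespace: app2
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: developer
EOF
```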
5. Apply Role Bindings
Apply the defined role bindings to the cluster.
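Assuming bindings.yaml was created as shown in the previous step:

```shell
kubectl apply -f bindings.yaml -n $NAMESPACE
```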
6. Prepare Kubeconfig Template (kubeconfig.tpl)
Create a template file named kubeconfig.tpl for generating kubeconfig files.
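One plausible template, wrapped in a quoted heredoc so the ${...} placeholders stay literal for later substitution with envsubst. The variable names (CLUSTER_NAME, CLUSTER_CA, CLUSTER_ENDPOINT, NAME, USER_TOKEN, NAMESPACE) are the ones exported elsewhere in this lab:

```shell
cat << 'EOF' > kubeconfig.tpl
apiVersion: v1
kind: Config
clusters:
- name: ${CLUSTER_NAME}
  cluster:
    certificate-authority-data: ${CLUSTER_CA}
    server: ${CLUSTER_ENDPOINT}
contexts:
- name: ${NAME}-${CLUSTER_NAME}
  context:
    cluster: ${CLUSTER_NAME}
    namespace: ${NAMESPACE}
    user: ${NAME}
current-context: ${NAME}-${CLUSTER_NAME}
users:
- name: ${NAME}
  user:
    token: ${USER_TOKEN}
EOF
```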
7. Generate and Test Kubeconfig for deployer
Set environment variables for deployer:
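Mirroring the developer setup later in the lab:

```shell
export NAMESPACE="app2"
export NAME="deployer"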
Note on Service Account Tokens: The following commands to retrieve service account tokens are for Kubernetes versions prior to 1.24.
For Kubernetes 1.24 and later, secrets are not automatically created for service accounts. You should use kubectl create token deployer -n $NAMESPACE to get the token directly.
If using an older version:
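A sketch for pre-1.24 clusters (requires cluster access); this mirrors the developer commands shown later in the lab:

```shell
export SECRET=$(kubectl get sa deployer -n $NAMESPACE -o jsonpath='{.secrets[0].name}')
export USER_TOKEN=$(kubectl get secret $SECRET -n $NAMESPACE -o jsonpath='{.data.token}' | base64 -d)
```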
If using Kubernetes 1.24+:
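For 1.24+, request a token directly (note that tokens created this way are time-bound and expire):

```shell
export USER_TOKEN=$(kubectl create token deployer -n $NAMESPACE)
```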
Continue with cluster information gathering:
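A sketch using jsonpath against the admin kubeconfig; this picks the first cluster entry, which is typically the only one returned by kubectl config view --raw:

```shell
export CLUSTER_NAME=$(kubectl config view --raw -o jsonpath='{.clusters[0].name}')
export CLUSTER_CA=$(kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}')
export CLUSTER_ENDPOINT=$(kubectl config view --raw -o jsonpath='{.clusters[0].cluster.server}')
```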
Generate kubeconfig_app2_deployer:
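Assuming the variables above are exported and kubeconfig.tpl exists, substitute them with envsubst (from GNU gettext) and point KUBECONFIG at the result:

```shell
envsubst < kubeconfig.tpl > kubeconfig_app2_deployer
export KUBECONFIG=$(pwd)/kubeconfig_app2_deployer
```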
Test deployer permissions:
The deployer will attempt to list nodes and manage resources within its namespace.
Unset KUBECONFIG:
This is crucial to return to your default admin context.
8. Generate and Test Kubeconfig for developer
Set environment variables for developer:
Note on Service Account Tokens: (Same note as for deployer applies here regarding K8s versions)
If using an older version:
If using Kubernetes 1.24+:
Cluster information variables (CLUSTER_NAME, CLUSTER_CA, CLUSTER_ENDPOINT) should still be set from the previous step.
Generate kubeconfig_app2_developer:
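Assuming the developer variables are exported and kubeconfig.tpl exists, generate the developer kubeconfig the same way as for the deployer:

```shell
envsubst < kubeconfig.tpl > kubeconfig_app2_developer
export KUBECONFIG=$(pwd)/kubeconfig_app2_developer
```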
Test developer permissions:
The developer should be able to list pods in $NAMESPACE but not list nodes or create deployments.
Listing pods with an explicit -n app2 is redundant when $NAMESPACE is "app2", but it confirms the developer can see pods in their assigned namespace.
# Deployer permission tests (step 7):
# Attempt to list nodes. This is a cluster-scoped operation.
# It will fail if the 'deployer' service account only has the namespaced 'deployer' Role
# and no separate ClusterRole/ClusterRoleBinding granting node list permissions.
kubectl get nodes
# List pods in the assigned namespace (should succeed)
kubectl get pods -n $NAMESPACE
# Try creating and deleting a simple deployment (should succeed)
kubectl create deployment nginx-test --image=nginx -n $NAMESPACE --dry-run=client -o yaml | kubectl apply -f -
kubectl delete deployment nginx-test -n $NAMESPACE
unset KUBECONFIG
# Developer setup and permission tests (step 8):
export NAME="developer"
export SECRET=$(kubectl get sa developer -n $NAMESPACE -o jsonpath='{.secrets[0].name}')
export USER_TOKEN=$(kubectl get secret $SECRET -n $NAMESPACE -o jsonpath='{.data.token}' | base64 -d)
# export USER_TOKEN=$(kubectl create token developer -n $NAMESPACE) # Use this for K8s 1.24+
# List pods in the assigned namespace (should succeed)
kubectl get pods -n $NAMESPACE
# Attempt to list nodes (should fail, as this is cluster-scoped and developer role is namespaced)
kubectl get nodes
# Try creating a deployment (should fail)
# The following command generates YAML. The actual failure would be if you pipe this to `kubectl apply -f -`.
# Based on the 'developer' role, this user should not have 'create' permission for deployments.
kubectl create deployment nginx-dev-test --image=nginx -n $NAMESPACE --dry-run=client -o yaml
# Test exec into a pod (should succeed if pods exist in $NAMESPACE and role allows 'pods/exec')
# Example (uncomment and replace <pod-name>):
# kubectl exec -it <pod-name> -n $NAMESPACE -- sh